var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[tar archive of Zuul CI output; kubelet.log.gz is gzip-compressed and its binary contents are not recoverable as text]
Foqen4l:}vZ*R_b<@#O1oyG&+FR|~D l?a ٴSo-HpUC]-|5T1bwŲ&}Ml0+˸uV: )0$'rR;%rOͣKB@ot! *r}[tH1Ďb*EcHFXЉE4r1 *BP`<}9hТDr O^Vq5CSzZ a,:$3SVl*HT)S\-2Vn*HiaWK7?Ofo~8Yvln45Zn˂ 7TZ3L6:޵2Ɩxٗ$8;:[5 jЛ?;God1cv68恡v Bbj!K?w& %M E>s\e˪~)R#p 3'Z?ۚ PyҴ6hs(aNXa^$^Wj+w1%*,~n!5J ۛ\mB1'fy~A9Kl8F[^Gx%JȣT$:P֌Q_1Am.ꢂN9VIr 6% K?IIuoC$ծ~9Y.)њ萌XTIFt΀2h-2FG=z }p 3++)K߷/8fy;YIaocp[*,F3F .{g\UΞU['TGJCMZk;@u["rMqE[ 7mZM7b~y1$eڮݶO /H[@^ng-ow'--yv#ܷp[,\bX"/n8jsYX/4._'(jZnmJ)XDzͷ6|WV?.XW1GUYcZlvz7XGrtx]ݰS)zJcvͩ}נhB*Gwrq-WJl3+fRSD)gb :,IΜOo*ad}4a^DŽA9eROd4|a=fYm4T՜mz7Ms.rjdܐ1b5YYH\{69I~xrb lX߳aA g:&HV"3Ytr3ZOo5m1XI`iWB[/-9d CG.EX ןtd77mS6D,+'c*ЗFڄ.f5xm)j ,ʼR =OUSs/I1+c:r 4em$u!B&MBY! F>]PABbLv@'ϽẔ31"3\y;贰).vjSN`A<5&bry|w i èN|nTFKm^*Ȫ,eͼMKMpBp49'C܃V*Ax (8P&#^ջ5};)>5lo՜\T~ziL,ЕN 0f!he2k[jC6BI/)̚E¡󿧽3؀\G"͋z7b+ƍT5 [ZZ#H%048C\(0-#Ϙe@I¤%ƲēDZŴ-B!*$fm̜+ j&3cPFs4: 29xW FʳYZɟꝕ'CDzrr&2! j⑨qL[E?{[mm#0nhpA0M,):rw9z:g$Ye)f Ėt4Z5w[B.L7TZ Ql>pΒW"G,D!Q(#6hYɻzz|^d|H$Y+mGY ?JօM$W5AxOƧ\{\ A"E8UڼuNTJnc9eNJH > Fs*CzHQچM(%r'TIFK)$qu4~0JaHh#U>ڞxx5X'I`D)H%pUNԴ/MIbfC budM(ޅlOU7_*KɲoGTjɣ_fUKk\4=5>Prպ %NPdp-CfKBjcn5,& YVF\ZP26{]Q`3F[:@ `WVb,AoTz rAŸ~kJ P(Y("3m(iKJ!PS5gQAŜEq]' sGX.Q!_ jxmBVV$ wG}TP O×Ud 2in5 nCgp6[7 V"Y ԏD?jDbYQHjd*$@dr*~ s@ Ѩ%P|̡Z6@ @R{CHd.a3-EΚ } 9.z-r K] 0h vŌ6Xjv!iyתB6ozCWH,2Iƒ5$J%޲YPRSi"*^`^`-AJwv_%b@U}76AK 0lrkC6}y"nwi?_61zhU/jcf* @z qtUjѓTG$vGc'A٢mŰ&R5(Ur$ZVK ]Byq0wFofĖT,2&%<%:`rh[R|F ݔ$뮓]RЩzPE1K[=z"2Uj`-X_߮dr5vdQ 'C+ MISL1& xrUp-9,Z~E Y^p[0p0) 8FDj,xHŁv]vu +~(]ڈ.jS{[Nk,#5~iv=X(-S쏔u20ha3RHB9yX2'ʮܳ` 9JʤQr3ri"&r9P&t,<ʬN>FQCtZ.]XwSJ- ]g2OH]ozgJp_]pB&g[x#vt7^"MO69R.Q%>YUZ ѩ Db zMKtION24nr!#~Ñ%S3.{kB5̍Z?'k515y p D٪)D;~f)ZRh74n F2޴ Wj_{v۫߷[e8\D+lj[=濋ua8`Y]zoq)be=/xdObamZwulٙ?"nPyMɝ~lV?a^Aѻ.5/Q֋PRH-}ݴ^|pg'j2v1DBukZx/-b^՛לAuuK^]ߏNg^L{W9Ns?1߶?uv xwzxw[?_Q^wN)ϏKf7 c{h%kSƿڹvMg|yOӷgxy>蜼y[/h-%bpF3x&cp4j4.jI[hOAEG_z}y.oUz~Kg]Vެ%Yrz>Gfw#_N~]w=+w6o_T -o8CN9j?~t_囯?|__}k|'i%4ݥ}CaSww?NZsS۞W}6w9oy 2}8LfHW;߽UzL=×]s[d" 4?<_!aT^\Bz udy@x 3]]o_F~tM|G= HFzr6#"os$ v z5JԌ#;*{M).8L HfVuHl[WB/\_3>'p;es'زĄOO'?πŸ%YҎѷ6ǡL:i{Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz複Nz褿\AGI'm7?4B'hy:iDI:i+^'pEUm> Q74~?|OZF9oWxs;JG7okޙç_Φò|ƬYkFͶ ,EdS`eԫD@]LK=]%dN2aj[[H#$wŮzfy'Geiiy'Y'KIYq9_<ʫyx\j&FNP6,CL:\'.!ba7n\Dh_lI~? 
\.kn0n{3٫"jWxw3~wLU1jjF4u1ULbSuOMU$X >`8%;&e2'J@\y5iPSuԶMĹT7 0ZDjoajv;fE3^G11?,Ycr~޳UU[qMPL(1c | GH}"QF.>ag >9:8[:B٦ރ8ߝm9w\Dj9 ?+*.%w_eS0w_MnKM~<=}$#v5܋D.G Hdkz(^b^Fd :>- :{.0›qɕXLl |;2ƕU I6C %݈LaL;mAbަƒlx#ZE0 3g-Wȃ)=牸aߒTH*Cː QքHbp\Z5D&<8wavӏak bA>󤽁ߓȐ$vb÷P04)a2#x(qFď\ڧdW\"-Xp6;T*Gr`a5Z% !6RB70" Tyːpc`ڱ+톇aK6;O3!A06 58dɇqQ K멽 QlbrjADS*RMakɐx,A43ý^gU:q)^=" =`T$IHﭐ:h)MYzY=uӴYCh %Խ=Qsp#=^qw7]o.3,[2aFDj#6*5ic9(Fq 70X.iB aL$C"fKLA` eCT t|2}<9OWdu+ßadr4Ny^&O z S/ QVr%(Yq8k]%o7qKku[cb9e'kޥyMnm##m7|2Qin 篶Km}MR 6b3طj_\nv߰ˈj\~+--F1cj%yИȾvժ50RRI ss[w!7 n_z5]-Jk;0>k/[ ѼW[Ս-rf_r5@V[j˷:ϣic+N۸ WӔ#goy-T1/?4HeG`?McecoΟ>}z}}Yrk_ `u>\ޫ Nh߸p:{8Z!gZz_'.]p{d2h2i]Bbx36 A~6 Vٕm\mAb"Er HwBօW͠V?V_e%;N 7M~qyǩ->u|"q5wLP_G203OY,q=ŵnnhs{ӅZ%n|dEYuwݭw-{vkaj֪i}lan޺3- 1Qu9(F}Lm7xfBNh 6IY¬Y=`$NHXYMH}/ ݕbC@ ~ӫ&)e"V’qƳ)c5ZrB$%=@NY_^ P+!Ja½Ǫ(!u*B}/\R_tW{&gߢL7'8G-/n2PCLG ,?4>H\AAJWt%%ϰ.dp,s :eodywptܲ=}q#~@C/P.#!\ivn@np?A^7opmi {ƆA򽎫J~U641`1QUKA/Y ArW˲1 4.EN>.O珽Ԁ'Aoe%+kktTig X_}xeOEU˯h-s8gY͙uH64Mhu#720` ̫kW]ڋ S քRq(Jپj(PT{z@pe,#R \ \}+RWuAJkHV"m9Wi^r~ y~R*_pWծSOPfumtfc:_u ?>9^}V͟T7ͺMV3Qy,117t2oo 0g09304J%)w~oF(`~C4ء4J+ľ4J) L?GDK@`++Jk8dxOgWhɺ-?Bq8(aWBDgWJ~HgvR&8 ${o]ʚWV7W(6W(W r 3+M5 ` 5P d￈8 %Ldb[`@&ӽ^I.IpCf /DXc^9 aDTLD` 5V}7PJ͋ 5˸ ` UP \WGMJ`|L.&~}T_e*:]WUX`z^y-a%^VIګVyZ& /]CP>;tM5{?lGMVR?iHn,rEx=uS?:)eWTQ2:\.Z4zyί/}ZL[2>ѨTFdHT<~.ʄOMP=úz5e_L7Z4y9Igʸ]/{~}`S.rZ8wп[1^Šʭi)'aXca8"`!-!ނr=:QK"ْFq\/r`AX' J>|ߍew6} }' { jE t?rɲ=ZSCQ6I+X\^e.km#GEXmdj`r2; ,y,9NbbQ۲LJ c["._U`ǗKGRݠA` \0TLp(9w\ĜT@Sm (S<'UUVHPpEޚ`୷ȈI(g81h4Xe96AIRw.)dL9L>M&F&$Kx#`GXTGRlkJ4KڅP{G%Z4&敱[KArIk3}B N賢ԽtXT-W-c4m@#xUJ Y\#+*MvS^HJ Ňt#qWNhg.UJ kU_<&pvoY)L&B'b# ƒOՐNfꥂЂBNo/ZѐtɃWI2c$h@y`PeBG184*4Vڴ}σmEZ6D޳;_pkuf"l(J}&WF2yCrt"$#Gօ^ > 10<,- zo Cz͊ME;9y{CL-1\\B3%xW fVqNpL5= %\LD2׿q7> qrx=VAJ'oA&dH |%QqK@ 0&.ѩO<>MgϏ5k[͒U]Kl kr H"7~uEwx~뷫ē .|D;_Zxv~qQCA8G]4_L#K;O1lv";qF=,Qm4osZ=^Bkge~«Kb^i9<ڮG竢ǧV;JHƺFs$t5Ʃa֙ mLIh*+>>-zz3|8crf^n먂=r]v1]:--hRF篓6Mj<(jdޣ_~`2!6/pxqF'RG~|O?߼zw~| ~WdqD+0KHFᨋ5Vٗ_G}_ ?>fC jz໌jNc-j֙vWp:ߏ\nwbq/|dM`.ȩ$fNV@)e.`ЄAphcPQYtL۞ƿ3T㉍81ʰʇf#>8_o#fҫMXէTLہvLWOY?AH܋]w(u:xmd &hUy`^/\paZ2FUP {za"r-O"qjthb>|]o+eLe1YW~s(L}(I7C>tO;C{݋.W^ =StBLA+gpM\\ny;!EH ,JAdJjgƁ:sR$o!qFfUviceؐKZoCm/jS`ךyex)[?m>\~#T[SVZh,ӶC@Fa} ˦`Fd@!("&^,;~ eFJpּuUpɇM^U\KcR7|n*VUM$ F3Wv1$5X6d,1t&I)C6*»@oHb6mT[Zܟ@b7r)޿y!F+)8cZ!4&qfD:q5/q>(Y# )M<ƹ.Mn.1y7qkp2S.w՟QRݥ^`^>Т\X9m'YW۞_g!v5iP`gi<}`^E=z4l~ՂͮW?7ߏ'Sqf;m7G@yk?mȮ ;:[Ee`Mz_k{l !-˔Z\~:n (SWIP+8!m#I6ZQcwfc2W=M7Wiۋj/j")g4y@[ϘRyIf> :!DT`@7 ntp[_ r<Pѻ*dV{S!WU#g#s YĴ6[un.]+N} Q>ܱSO)f^ALW_:Wx:.T PU!Y)iJ&R'Ǡ{C(T&9qsLGc1 ̈G!24?j^9ifCC@͂3mAƙ k/z 8\nb L~1=YcTy|2ng(dF(e#%ixRO?σZdKBQt!2D^cN'L) H0. -F)ԲXS,?Y^\?,rd/m65bll~N[yϗ~I $@ AA2ʠ8$TTF4mVbuv*(N p`?~σl~+AH/8-.Nd @o?oᘒ_Qe8"sj*rZ*j?pʵj{}e2\+꟮?G MϴNEc2dp)AL)04A0(q,hs>~-'~O-G& S3sЊ6`u0KhPLv6)2,'Bq}K,FChShЕ1w>B8ʇϾcX2A?W:gM/1DO0^(prr2Pe"-s:pM&-1@Bb(VrT ư-5ߝ9 j:lOJ0_xlZgj|WgMTΖw.? ;Z'jn@<9kV#yR8&%kwLR V\8 d6=gSTՂEtI,H98G<R2ݺ֔N!0<3pA%,-L.S9|LJfFVg ]e]h{]pEQ ㎢;Ǫ%rc޾?MG?;Pf.# הfL1sr %P%"R7ْOKP=6A2hpd+$+I?duE;Lq<];Ek^k.ECl@"fs (B$H&;+!A9D|cYi1X%yȄ $/!511" #(UdHNu2vN5rׇS,WhjFF׈F2d ?$2,5OB`)J,Y,y^K u 7XΘQޕ67r#ٿR/kԸE'1Y۽;NIR{b&ěEJh,*$^L,g($^2Xo4( %OFbu uC,՛Cmy5Yf\.v1!u P 0@TpHd1a9Dǀ<N)p)xXvf<$ l&$|*ȭ8ozFQ:owν'~ᕎ llg369e](;Жd a/t)K q-P-2~6:{H?U+nYu~oő >Hxx?r!\0s,Z {c>ޙd:{ ĥl| j4uFmHJPT+ωAsD)RQ ,5" Q(5)Ɂ Ǡwi͡n{E9Mȸry_Y-g_w˝}qAEPy1kd`12OM1a%J`! !I!rM[(r 6(6,Q$~4>1Q AQxmYsZ^VV`\MyD4>2H8%̹(Lw?L`09I-A& aExhr;Y =6j)rڀ#R(Ɣq g@=`W`)0Ԃ<Oz$Sq1 8ޝ9Z?#J31fA[AΉdxeVhr4Kέz7Lejf?  
mj/RTe׳&T \z^<)#8AX[ၵIt+u))E.(A0!LP9t0?rqZNwN֟Rd)+Jn)kݘ('>"|ٟ{h Y:`祪Yutf"?eE,2O_d>7ѧ*b9?eczL㌻ +?=oi[}]@ ~K =Wr![)2Tana~CwOơhO5aMYݣMxԊa6T(Z 42V)w~=|7hK aZ#t~(ĐL{s񝩳Ws嫘bzk].AdN8]hH K+< 3ɌIu.h`@ts??|ؘʟӘ>o{y`ј:skoK!W>3~O km>1hU_*45]Y?|:/vq_rȌoc8H* y36FW f͟81p2fk>&Ƕc@nN,Bex.:n2DJo [\.sN`c$PU"U wjtPUjzo; &oUcZƥ2RF2M E{N, f~}n}+g E\;<N "1*FD͌!1:#0LxX0t_jF7_;MЬ7<lC;)<)M5^L-K#VS ț 9zI]arטuơweCrF U-PJ~'u}G*Pp]CtWܟ+K{_-u) |Rs|٭s4WkF1T)b+t@Hm-_jRz\T"X@Uv-י`Uhvhx{1m>{ ljC.lmݧ/v0|![IUROl#)zđԺ| %pLCf6#g3Zb@N+т?vLJ+(!єTa8Tk&g ZGK!XqlB1b"hj/}dzYk501֯I/!m [?/!1|yS^m,ozz qT v[D8R#kl-9ksmtJM:F {3=ZYK0pa^!X&HSE57V"FMsFV;|zIZ נӛJ;,ll" s&b̄6ʰCI17ѝʸA(=pQ|,"Ep+RD1}>J`Jn)('.e/ؿ'ֆ)U I VCcpEԼ(Q//>zއ@2bؼL.'YEH>7;E8J H|x6J.^XNݑ+SM0eϳ6- Tʆ&ٳ\XE5{ci^pσ fT eƿM*⟥R-ߊ"]f ӼԕrN'(/*z |O%QT9ʾ)@P猍>ȱ0.w8 my=juʹj]l@^NfTT1 ,ĕ,IJQH4~ %<@rFϵ,wJÃ&YnOq|=Khq^>Qh2 Fy@h "D'"8țhc0H!cXb~sVA-m=3{9A r3CWҰ5Y.5eI?fTOwS޴О.=ZykJ{-~nӺ|b/Rb5>=\ۓEx=rÇgUb-]&0%y]$1_=_U4 |^S[t-NGIZɚtTs:zNG1TyҦ>=`Ys;Dk)}Τ( eYY֝eY 01x%Hx)(S$LjʨG"E;$eV˚r 8M(rneZGEdQ0A[X+/A4-nYBK l초&Cp i}i]gY j=Gg"&H-]ib~2$yy8zLi'(Ƴ3`A2<T qS5~FO^ }"H-QFD,Lt\T\Z'aٻFrWtleElp$m dik#$yf[l%[lyw]U|%r:(a*e]D&sJd00Bf2&~"E \.|J'o/O_}ن[}lkyύnǟ~nO ¤m8D$BT\&I̡fISt;]U u. s}!Ih. (Ȩ2Pق߂⬯*0ț{־mhwN2bd/,fu1˾]&`GhEixlf3+J)[ 6Q0pTzy$&PC(>T.`,L|h  ҙhe.S7u3lJ!%pEɳ@\Fw eFN2/h0h5qH(w¦a@ZƧ?"l9MW?h"t+ ט$d,c3m^ojufŗtȕz b6~5eoOOel\ζz+jR1.` 6%ꀳl'=13@PwM8s0K.i J .gH 72pUj{jJ5_XM3/t/3{hNBD *s.Y#"Ɇ{[R?iIEMO#Vӏ=`u=bG8@f &~`ImwY.U$TP·$SpX] QN-`8cMJrd(Qp +(j0%~<ѧMo\YMK__.$&0c" ԚUX Cn';cSմP(*Cq?< Ni(~eJ9rjamE?n~|G_ailTmFBУ!YMnSqL7Efǒi zF)qxtuű+R?CA2 BD%˲ĎLfR2W@,hRL5xvR: q1o$}\0m7O콛޷6}-o4DeRybY(5Bc)bYRp\IcFP#]÷$! /L)uNaII$MGn('6a$UY@Tr,('O:a#- 3#RUGfm2WV*2 cfd8/V8A3zDB5>;S(Oɣ"hS9 y \pQg==ǫz"N5J>:tlEL4^le+HjtW6ه}B= fvӸ_@jXBVK5x1.F,,r7MT{% d,%§ !@LX?$4w}`CuXp߫R: K1yE"{?SHfĿM[ E\"{coL.g1/}]w~3':7w&)wl^|rA=H 2c4u4<-K@ K=,v>KfG=BvҶ6'm\k7m 7}u'=}Ҫ}оO>iouߋ/-旗[-=ɋ:g(;5g^͡K+a2Xze a Eb _* c:@xoG^]cwxƤda5 Eu'zsHC{Nmbsio,X}7-> PoJBo wҏǞ'{~w >4'-'Zg F@ޏsPLEX|~$OcaB*.d ƱMWgmc`Ώ,BO<7twHэ{f>o9ޛ`gI"P"I`[W= םQ|=9`٩ _vd::o];<2fu-;0tq>WPhMm;Sm)^Ǻ~B8B|ZŤ)ܑ<tXNxQFYnj|9}U-jO ;~YPJ@TNp 'M$D#{Ƽ%H a{x{~2.?Y69ivO,dp#NѺ"]s oYH r30ցuӵLѼPwl75zi\#iFw? N˽e}{:@{U. Ts릻;=vy-_f0߯K/(0BͺVq<1z^4RШlE1&Zn2G~.v/ MKcJ;_9&~\^nh ^q;v V{XmpQw "R7w1JeM)$&zaE`^:u)rLCpXC{W'AǧJ,?#y! #?❂DMہMeTJcBDlFfw8.Y- 09IoΑ|6 mJs/͆A9-&t୔ȃƢ{sMBG# nl`cPҊh4pQy3L99pF _XM .474mSM"ɔ#tY4\c}a c)ܺ!uq,^F&IX@@q h)dG9=rR;!%.C,}5%bpE)I Ldf81w`*J5qȸCS[ 6GH72=B`tY!Ja5_5if ]MQD'ln2]bĴ' YӋx-tZRlWb4m}q6:!H# I ǫKBCȄ,XJwZnY5 "A)5"o8bᚄ. c1h\|@Ng90\Nooo?`*jelqp*t/gO|"9ſd&|2J9F&) NHXJBVJ::͸QqAڛypފV&GѶ̼%q Cr(Ѐρgs`'tv3mA>KSҀLj lJwУ92[{k"YrJGX9&eHJ.KNl#'w"Q_DS[K&ZLd޹6m$kT*ڀK\X:u|֩/vpHEة ^DɢL1H;3n%(hDkEb+qR S D;Y}p ȴHH+++):KWZ %'\ 3f= D 3/(.+w,7 0h~*0{1_W[ͳpf1ύ >^5;:@bv|CpʧF竪2#"Z|R1OS\_Qz=˲DwqόSE@: xBF()jp)h-u6Hf"A1EdUeSdB)/  2a]q\-,]QvVmOW.ɼ'͛4a%t~fNˤHuDmVWQh8<_\nS㥢pPdzAQt[e7&abZ7oQ\ݮniyt8e5Eⓗ^.x(ț|>ŵnLDy 9⎋3bۙ@m=gv4fX^Ѡ͋ɴ]A|NVzvsh@[ʅa:몑κ^H/3c[HU WYUuVD2blʁM'Jh_?6,>[2ǯGg/۳g߾xu&;{g^~Kw ̋ma*~>9ߟ8ԩ|SkMr{y`voMNFz䪼HQU^5)"ŸQJȾ4E'>2[P]Bz 7kKINx^j̮C|c$>1HTBZ ,d"xj@fpNY &+PqE&ؓ + 1ӊs CTJAA%s@OE)xW^;gstYs5V/;M<~z?=~>+;_FFvm;l1Q)rQ"b_4%3%2JJ䭥+tL]om҃ gd;$=~ǚ~\c6]\'4l>ŗdR|n -@' Ic<2ͥŬ熎eosڦb{ &FY.Hw0MTBK-\jФi%d3M$1Kׇd_sMZ<~݈7n&R5yi2YbªLM Y!Ro5}ئ[6 e]iI%tK 6Y-J\ N$ ,ZLTڔB"ϴ*Iv@`r>?^rWNY=~0)qqgMo)2y+.fANNY[1KkC+'O6mSN}7~qkQ"NC8*_}ihvOLiuɵtjsqWr%-)%"wˋ9oof,k<[X\dӫ{xb*#/T>;YRCHLfK#zbza/J'OU\x}[êJ~*qc=ӸI%JXXwK:{GϜ 7xި|}Y.-j>o.Rm?v }E^_oVIL賵ѝbÒv7F?.ުz*ZpOduUG>.;l! 
zvHMYAD}^_r 1K?O|t׃b ^<XPpjh;8=:+NH X2.rdWҘWݦs{TZ XmǕRHqA\91 \`%|0b*`Bq*M vW^B `pErAPpjuAVOA2;vUJ!z}WjaU=Z*[]2=Rr̀pł_'ךPpjn;H㪓rtf͇K47KSІ}fuEXAĹUP诂~o|qhNWsLsYJ9D3?(!ۭUFi,WP0jzwf RʅU`o PpEj^?H%l@Uwpn.W, YPpj}+R)e]uWp6 \`"\\v\J%9.pbpUsVB0}g;Uqe50C WٶTZo"V2Ȃm8b>ARo;X%WĕU@"Ƅ3wErPp{]y(erur Cw;䞐V$UOGzOG|Y; i"G4o,hF(}ڤICnmNg4PwVeE',쨲 wh{|[u'QРSW~%K+(xWTPj㬎/YZ]L?!rы76FU1resiwڡr<|1vKutL|ޣ QRH8j$W`vڒZ wԾG~ y\,첞\pYzj}CjTe9x| \Wz)f+ W,PpjRK㪋RrW{ +n(`ڑV۟cn0C0SO0N \R-/X]ht+~!~(SkN';auɡ."}vk٦R}Qv M v3@rL-Z0mZ`S ZJo}ڏ+\bq*qA\2VpE W,PpEjmԽwE\`E+lT0b6\Zm{-m1]ĕP+UBŰJz\uWV}@"ʄ \CR\J"*+HV<#jѶWj$%  UT;վD =:+g *$Ds(\\f՚U~Yq-1;=np-JmTO9;vh~e`lYɚrm`= Փҵ@ =R)m[ XH!ɕbnuZX Ƽ1EYB)PLDK-u`j2^i]z-Ѐ5BDqUpugHח9i'+c1Dcn1h"\&Դje2.^Q_?{ȑ] O !H60`% SHcjRgF-iF\ 0f[]eDP M;;zd vwC(QǤ3v fC =O-L[Jzۈ)|]_!Xz 5/B`E(/FqdJ}w GSp&rO}n?vv9lךF(ތl>||Owd'7PDž3#PQܢcO薑y<7wg?f|`0o>a?K4#WPOw}52+ Ml__b\"Pd06D4Cc6]DUk%Aiɢr"k-}aSHehB"=@y"s҈&UB)5 F`?@.+j*rlwhY`AC@ Ј"'%]ZMReҖ  ,YR21>>Oչx ҋqPڼ<'*RI&Y0ڢDs$עL^ܷJ0%(mCRt5J!IU 5L0Ǒ֑>ɾ3_eAchc߰Mh+\`5 j%6 Bb6 CVW(Q3 I:'͋Y笃S}hhU)*lԴ/1KK'EQ+ ])(T%JKEv)I6F*A2](!qр)k|BnxJ v+ **V(:Y-;y9kxA9]GDaD TCZ\] ڢI c!6V:b"uR6 ]k0_ƺxZAr0U!0&w(Xa=Ҵ1 Ki)n`*P*#rҐ eׂViCTc`q^b ~J `sE ] * F dFfC2"Q6p.H#`AV¤꫐$NP D(@vU(Xd*z]cJd1͐jPoBͩ X2P(SP|k(P,Lh<46#]3o#2!χJ[SR : s<&`&b:1}l9ǞPI!0ά 8O 8XS9(i3XCfұI!]/Vj M e*3Y!(@Hq`Q g (Ez@ߐP[ (ĩI(Hv\B*^QZ#IUDI)beQ7綄 1R )~UDdy-dJ)dV."Db0niXE.jE},B \BиA0WnЋ q)Y3IC@QEIJB-}fC<ء3QxB%?4'c\6go\(C`җ`\`Қ+V2 ]uV ]u>wJ-HWQr QC ]udlϝ:Jo0t9P Dէ> > R@WCc}5VWeSfϯ{#X7v/!.^_p mE-zu6F_`RzlQ_zn鉈_1m/ȃ{>f o=Zo9y sD/u;[ʏsn ׋rEr&Z47@:|&h{G᩽ +{dm0p8snp=5:~ -.+Tn_oН\j?|-V ӟ;nz^_/ːS [ \l7ȧgZ^f< > bߍ͏_W7o;g'?/A_5t>o7{am#_חem/'9{MWgPYg!6GGj_ȕris0^jMut=5ޏѸ DV lD}?'RZ=<},u/g^F·`]r>樽C[w2wQ@wek5+ÙVHZ-;k/k~yX fg_޽ͫe4Zo"7D2 ƈͨE geEdi^DRa/܋S\k)-PB jGr8@dd0(Jw)#9~݋Nf Ӊq) \O9bhSr[^-Qp8%iM[m1h9Z!dY6%ZI%/;?86vq. w8-;aov{lEo7gen6WPy]xvwwY5!N='W/dZS#wʓs؄6ÊsW1WPD&P8*XheКfFVn@yql3Ao6 ALɉ*%ze%,1^mz}z"B| {lڴkc[~C)t0$]tPp މyٴg!4vtg./+@(KȐQ!C"[e*PJC(O<6RiT¹I6 1ҩ:|ScQt NY!XF0.18E|щZLZL5^4Xȍv7.f(zwD%wڮfu_G^ ruVLI6>_7+trY{ï:l)oUjlٷ*Qc3B4f<)-fF ZRज` ΁1a|= ZbJ1[XL3N-ԃ-<-\Kmy"574..|{j{N&=b;E1H]$@5AxP/抏N Ðɞ 3(l2乊&22^Nt9l]5 /Vvjjvkc &k|)Dr3k $אV @9KؖKB*CdHx`Ƣ["F6 .Xȥʖ>.&a{XS9g:E,'ZDZĶ"yװ{i^Jt0ks)c>o1FAc0`u\\qCL8Mθ<J@ SU{Y/DHtX̂xNINI">`abiǩ4{xNԑM.|Ww1sv(Npx?RB5VŽOS9 ,7J4Nf^FɄ8i"q,N?hs{g^͟p^M&#z?[ lXVs1o1D #j^rtwj~LƮ޷ q+|eYw6}.DN::k5bN@ˉ[ԑQ4r~;s` Y˭00({=ehlL3]0rYMD:%OUC2\0>rA#!&(ŨəV[ 1xFsD*V`v[MSQO @TTݽնZ&n.j祦:g ;"4Qf@JI`&RkpI'\1sۈaSTeLO,fy"MdX>h\[/l  J8 $J;X"'u'Y/nU}6FOl9jÁͣq:hNb :(!azf0;&6蘳!CB fvcwX@hBK%xm] eK߹'H 'R> rcG-)&&˴:x*ߌV;m*>5MwR~sԺD$.4LL0OoTӔil0ʧnsfFߏ5%objCn^|ҢA0/iWmUޏB\;x/";,N}}egQ~n1#&]㖋~G>'=r񀮧r[3M=Dtޡtq6{ӕMjY6l1@m} b M޶Ǽ,5Ma&uσVD}`Žu&BmNݷiGnrqWZFv^5MI$ n  @[+ :eҰRd}/S/TЬ`xWv !& 4M*oKD+0*` FZ7zJƩTOŅU8`Qʁw8'fD1\O C58[+k>Ttڱ6܏YJHTJZCHqiltUrU>9Gkije?xQ-?iE]|`(ūja ǟ<>L\q|?HZv&eЀyQK&1j!=zŖ8kTo5xS/^.Ԑ.5wV?*kMXwf zX^VpwQۆcތZ?o5ntf!TZ=`VwGM 0z>]4 <`5?sh썠u{r>}j8 F~-l^}<k韞N kZKm4/ވ^΁|wsGsn{yar "%%5@NRI1[A!$Iˬ. 3ښ)=N&XNNJ)76H 8^ YQk,tXs)h@IbF_~r$}̞ ,R#4Q4\cuݏسo>&/"f8Q*4C( wD33效D"r#HeKnb< P gNϝV;@0 NLpJ1qdw!Hfg *9 B*$Q!3EpĠ:eA5f Mx2]Z4V ,}V(̱ͱy9׀b kڹׅ~͠a;)gH-or˿B fh@\4p*f8:S=g>;IbX#Yd&#s2 ^Dm bZ'{6a~Yn(%`pM %i"KI$YjiDz-ȈIX$5b28/jG:4*thش=w˴h^ᦳٖ1&fXDB'$np >)Q&YNKhfz۾me~ HFɣ x`!iyiDQr%@dXk<xGq>5GCpnci:8]FOh$q:Hg{@ߺ)L%#PV5@ M7ip}j)yS43J[ @*UPqEiM Lg5kek~,lUQngɩV7֗wM{oM{[] (; 9V/w2#YqGtW3 ̭h] aQέt3YK5!C%ֽ$3%9wt/sw:Cơ)kI\UG'$XXN(0+3޷P'Zq4+kQ~u>(w;?w.mI"hd~7m*19{uvy(t5XVC?p8#N>ܬ@ZÏ3'^\^n/9)>_OQptFoW2%.O@ȶm =]lmƈl.7(!%?w1FO}.{k. 
{_AnuX Uc2Rė\3NL=ڥ<蟷e{&~76Ow_>_?߼Wo7(^o~};_̢AX*O=|^%ko5E^k~ ~KcvW[n)@v(<ΟG.ˋO<ʊث]-AV2]B',ҙWǝ'L;~7UfKT 6PPRfuA%+1m$>ĉH@c΁Kf^ 'gv\.*ϣ\d|#=imC;W1,'#&A"hTby4+$S?:$ oTLV*[O_~ky}߹Va; AE@CcWd R"5+$-_q+8pWpin(j' 1lVEBrX " -ZDixߓ"$^A{& pqR/ *! 3-h3ǸA$Q"٘qAVj=[A#y|%!)o%޾{^.sT8sMpBEu"ZvXn1w$cNRakGY^>Ñ8@% 7`Z,q$O'=q`ZhHJ%҇VT"=0Y PAK"(QSTs$2ͽRexO*2΅He*6 L$xZĜCm1%UUa6 }ށS\~5}޸%#iAwos$GIbTz7_=..v &(ٵHV4&B(TCao̧U7t۵(L3l'a^GJ| (;YÄBHEY"x%r-7vϖpH},{.?r |o3'RVף[՝i!ui ,$V'B(! @6n(<9pڐR! >P1@#9G3KbL PFUM$ۗ?v&Ҁws((F!.!Bq"zC_yMrwv*Pc7.Γwq̣X(2ŭa`Fl৫`yEw!$HP&ô(͸؆>L982'!lؙ~\^Er-BnJOuӖ[?`;W$Y(ךY +~gMxOøeb>U y8 >ӄѥ8Cj { j̃¿|6X G_l2tK&Uh&"E"RHxb 'Ary`NE,Fve˸).?O!9ݦkfvfj:]Z Ό- )VZS9 )%dž"dA*%1 覸-ePwBFDZ8AAtKp54"tQNbQfW\_ UF+k]!JxKW'HW$MzΌgR]ec0U׮% ҕP9d*6͡+_@#MUFY[:]I͈b +Les Weԝ2JN]e5 :hY탎3JNr+,)42\heR)ҕ1X X62VVw(n껡+xC7qxPqv䱍# G2wCkv+hSIt wUPw m$ F9ywk M#Zơ4Msz 4'D BUCW|v[BW@q ʀus rBW-=]etut%57 00p42ZNWj}vHWBʀ5i ]BWeFkW%)ҕOҮT\5 t(!-] ]i*e_T=D\6FB=T]"]F52`CWWhYZCWS/1Uw%wp']VվJ]3;ЕhSOX>5}X F=s˄{L@'+GVC}$&hHtDA10zh '/2a'sDFEB\h&6F pgՎs"FQ\Ac *<,+[T:IYʅQdݍ$c,q+4[ IGu E"(v$u1dmnAW׮úT.KUU\Eۗ?ώo92B~}ĻAx1&p댽Lbw."oi3cdwRԋA`R^{r+&8?lv9ELu WvPf?Q2 ͆PfNIi^uEh~r\Z!5}]D;o(|,|:^wQ54΀$։Z'.iH㷷4M(6ݽŵF.fW~i/^?xJ̾3Ar6s7 ) >xEⷜAE1%%斮5#7fV5)mLQO4e*>>/&zrf?18;;Hycnu1mncͪNތBGJÚ篣qz3IQ8gWF>_w*,?qC ?gg{ؿ$q۟_/˟_޾Wo_V? LҢp +rh~{X鿽u 4hZڧi]s ߧ]Ws[ݠ>ֶ֘rmH7_o>Ɵ~ wynYU]Γ`%V,4Խ/B%|@yF)7(@'+_?co H=B_vƢgHVCzPσGi}$*IFk̐ QAHG$l: TI%Tٞa.1y*A?ap.QFTDR|P'U<. e>of.hvU^?fʵaӲ4H^.ԟ0X-N8M'kͽ[T*E^mG\_CwɤǏW΄gJB];vDHX㳣Ϥbj0ԗ(QD3,4ѰL,(QP"8%cS"8%dN8)b[hZi4jɉc, hm_^IR{ΧȲ֡@ˊQJ`wY@l-k,YTDպs?%BD:o&$0kB}ydUm^;T@gTAtgSA8.Ko/3W٤֝=|cCͧ'dJv}/eZ_6~繘j{Eޡ9YL[4gJZY&IHi&U9LcVU[MEK_qbV"^n*E"{=KUJd] Sw2XDϷM?l~]hJHY\8i:mֵҹڞf43hk/|͇?B5ehDJ^6A ΄nB6KbDƸAClXJ &F̈́eg1OhQ !*0q<ЫkX/m'7_M/FYeFK]LL|:6C"ZIdCv%+gR*d;孢sMGzE`5Kv7 ,Sf{,K8O3nbl/Mnn-1$w3lׯ8FAZ~s`(`\ Ns֢\ky\Mmϻ/˴r!-ʃb|HR}Lޔ+ GZbj\zo$(H߽gᖬȶ,vHm W,Rk4r^хŝeeQ`0,uY/kA={ovj}vL뉘LEfVwg}7Om ^^ "mJ 8dkz`'nO; 7/jOxQ{N9gb_ɌRaOFZRfelS5:Ry URUceg1(Uh-:DSXiKz񒖘y8m%;GFK@?F2Fŀ03h#|I~>'wkEO՗GH %Cхe,ƘIlR@uI 2ltI]ZwYRBN|=vW\p4C16fz9gV]xwc#TL!EPLE *h. 
A0i4I6d c؍ xBǹqs0=xC!R HOJ\;kFx4m,=[GzbS,ȲX*QϦBZ~6Jo-zKg i?`4Iܔk3g Fz Ydl)04A0u,s1_BKԗ/([ol$2=s>9c%`e..2{ jS)0 YC\>ǕJ\d#5eNa*[պk'd)=WH32r#;I2H )kc9ԗxrL뵎BK%g$ܱ{c"pf i@vtI?k^~ T %Bj :kݣ%~JQܯbܖe'+:q^?e]jrr '<>>?1o9B}5[[9vϓ1,D{K逯?UQ8%ʖgA:#Xڔ )β 9ZC 0.0[N@̒ j-aI81 Or&)EÍk3Tmd֝Wxm%8 ]e,  =*cfb 7y7O_&UٔH *0[*9t@axJݘ_ {> &CRؔƢQh6mǨBr!3P;VwG0TPt j;K(%E(XsYM$\2μspʹ5"ITQ|tݬ@sE21CȊF,aML Hq$|YGdT'kTxrpOǁ(X]DCw", -ppI 2&E%T" 00DF[8cZ&i%rLd (OdJGrM.*[l .F x2uu"wN?#_#dz"km\!bD›gI{H+ }'b݇a8LNv2`1:%ɐm%ϫ档)jZU2|uUWw;EIij/9aҰYo-8хǀJ(Zi#y "th('XXfam-H"\XGR坳I&ۓ]\:ۧ{7wMS7 NY\eX G!H+~WG|C I<{q Eu b%KcJUeKbnLӶ$[La0&Wlwx+gge&UO6Fk"] ` "ܶn\+*WErj_9JS>92a ;6̪wLmelbIff?px_4U*dtU|Pqv"k=E^Y5+ ɤa),At`l`NǦUB/_#) ɑ~2Kdln8]H }:ů!դ7!1V|AҘ|^ _l~H[J VXS?]dPb,0LenP>Е}?m>miboFi;a$&k{n\'ev }y=Ϧ_!ŚwgsAw D"ѡpȘK`mPq 5A`jz=~ X }fیx>&:Lܬh7VCl}v5 .nݦY9>;˳SC:uP$FţЈh88AzΈTF]-XPtN5:4ūl8!wRx (M5^L΃+)BNML:!zxQNz|0z:f/[iLQR}{R~8iM8JL#E^[CYWAw%0XzB[Ԧ8S$ݖE;=nyJ` r)4T5G)GI‘Q!-.h2LZ*ƨrR ࣗ**lJ9p.|8"He倐"`ettXϘ+!I0ǀ8 R{Uej74m;VgЩ0Y# $zˀ+x02JILV{~JPz<͔̔@Dfx_9n 9ny&0ǐRLuno!,qM^1%lP6\'1X93cgmI %ȗZrV:@9ȍе[$djW0U.),!IUlj`J#6RtrVk/;@H~9XcBԶQ>p|/̦ww˰tf]|Em(D}2 K|*i(n \9x.ozo>0} ||>54{@-)ߩ.ĴŁZj`$+"3TDB&R阎a.N5ӣ&%xe,:uy?d*KWːb,Cjx*:@ *T߆LzWA2A0ɵ r B($@b?33cĜEөZI}(go@jOVΓBd\?OSC ~(^lR>z$QXzMXf_$I@9ѤʋPG7Į: 酓W-u9?I1?KbFO)z_q}dΥAIVJքE@ 4`VxV[,WY-ٝ|F𠒊A~ ,0$ACt- ywȋe~.D@ےOY$0_|Yb>kQ~6P󍚷5Ewfד&g0A.I_TD {{sBp'@'E#ɗݷQxi]ה_~Sy5ؒIK5/AHhA"'Y|ΟxW Ց?Ӝw1C,`FxHA8iD-Zk6RP$Te&#QHYu2LwZJ"^ˈA&Z iIVUSMSC)}i߯[h3]r4l.U]:c=QŤO2ϸzulu5extmW]F^v"|䕞d4Z3޼,͹̃R_}d`o@سk׸r.]7umj4L_M4lJ|&S"}k(vG6B[hS(IXFNif]}?:9?ϤB]T*A<G.wᾷꟻMsUV-w&]y?wᮋs쵲2aqa^!X&HSE57V"FMsFV;|aJrOU T7VY YySIG h!uch):xo%zCgGD`\c!W Z\%(+D`(\%p8rt\X3z9J109"rsĎ\%p =re/]eH48&rs\c!W }u=\%(!Wbí1rj%yvgV!WAL[+ծ[ xɩV3W=i~^9eu?>8+^܄VM') +RzJzh[?y}1VsӼKLJ$ 꽝yJhKrF㺢4*^>{IMu~ekFVHTDJ d@":DC ;g1"ғsI8>"SD$p>撠eЙKr؜a.D`zI8X2|4*+б"NZgrLtK\%pѐ+Vg*AIx&W/\1GDT J2v,*A塓K$Wr nc3c ')I1H^[eJDG")dRD8n e>NZ߿>t ݛh\|ӛĶ`ڻs< M>/5M -03-YB%iÑ&Qʬ#RmDQr,.">0 aS NOtw%R_59Gc ZA/'(ezH0ZEurM\g,ur\g,v---l!Ri-1.2U3:cX3:c/ ΅{r2\'Ʌ{r\'Ʌ{r\'Ʌ{r\tsŪSS.ܓu\'Ʌ{r\'Ʌ{r\'9d#D*LuT=5uSU0Jƒ(Jf/v:m-Rvܟϸ-żyXcOM:SN3Hq 2x8J!ֿIPF˖u/}4QjgbתH1p_`IA*7|g+2F bK. 5,cHJ@D" e8>l"ohg'ӌ5CY Ί?Yk'dN<,#-ޚZM Ug)h/MsVmqivC(;N5\5 ub9?o "i.|)); dA#L9 )(ԝ4\t4 tٴVTE-6`M@ZTAp)u1 \w Y]P[/ Ч:nHգڶ**tv֛]{Jp4&l0=u)4NXM6 >q #L/Ũ:|WwQwO5oenjyzU0/|qY-˵]1/׳5!6#31LWmӐi$NzX0U V0bflIZg%hl[5ajȑ_aev7ZFbn^{{sna@&mTD(Y$e (|eV *"AYvW>R;<|~Sm,eTƑIt&nT -74xLd~yDcp_~yR_^~:Vʮ&a ~wD5tm[>ߧ{+dvL)w ߍWlxԍY'i~f7?]V٥EU?߶4!Ի1[Ƒx) Sݻ7uv`_lG[uks$HFz d(69G"Q].R;CٓR Tl"îp{ÁhUe3M$"SJBr}Py'RB,аdOg/>JZM1qhi5;uge$\?tMV i;OuCv7ijjKm.(s赹J?bm.4sG+$qGnYF'pѢ(5ɔ.Da;;tc >+o ]:zPჟn(1Y&kT%(0)Rrf c,oK&ה=Z'6z%WdvmПRܳfu 9OOەjZa9VuQN{2Ht4>g~/L7D'Cu @w& 0Dё@HHDoH{D<"U6VD56yo2ВՒc88o]r,q4=Yz1~`=AqV|;'7\PHT ~.3Zm]rZi 9JxZ1=Z_b[B\h5]P:.uUJ 8?D;dt$u1 ¥>raķ?v˻MMqJ]tq<^/ѭ9sʤl=*Hk&C:)4$ ԙlUZb '2#CiEΞcMʂ 1pDL&AL=JCǎQQ!"qL.'~YU[ 8#;7doiXADmOqek y78 шNzAStҐBj]̜{(]g[8!u6%E8.!.qqz֑ FZsAdB g* 2) RHgJ*,txg?Lj]jnBRiėE>ȭb9AiIpCe?*^fAХh?YGa^4aiJ'yjDv:I?):8d6|ѧI`?o=Wm]I!{㓉G;xL4`KLg+ XfVEZ {K94s v85I鐎]p'cܬ=xvqugyD4[J|z粎"#X!e(6濋tL'iaztgG%_|oT o|U,JnNO1Θypv[[ 65ox[[zAFugR:! 
>;NAۻ̖ڿ[Ydqu^eAlJ!!s'H?sQqhzQx}qZ<:-ݔOl^̌# uɯ,ZηmF/`mn'v% ь-y%y<`[rK[li6"Y)XMԹ\p-+tIEjXctPSjzo9l nԘuZO%|𡹓|ɺwB'!rk"4=7Jp]ʜBcNxQ(4"hf4q`fFЀWE~m#bqv=$|̱862l9<=9ic0iʬxdZ%e&.Ádj:RjK DeL0&+$w>?g]zɧQW*]Nn=K/A}6E CJQ zMTrVk/;iEKʮ%Oݻ_Vr'Z~8?Ţ+_a 8{F ʥ f*tVoC'W(> }=6>w5+ɾjh=mCvgw{k%`My#pj-BZd5jD[_v 1-1gZNaQt[0[xR T^T~ ^aNA t+xe?vo'a?qz/%-AvNg 3J2(HV/+;%q\j!'CEK4O9aςg,)֗xu'/|g/6c1(ЛVԕ_̃rZjn;hZ|x_zOvbRW@M͆b_m,o' Qo&O˓}𯝇^SŒ69j__GU< V% aKAC=Rz.*_nq!E]۞}nC:II6%[[Zemor>s7wjۇ.}'R.'FBK2UhfLͳM.ܬxԐ`.ShCT-4<@nNPmJzTiK&UIZ ߪ4az2 PWr.fI\XFX@Oa). 1@K&_ ,ߞ2엛9_jY۲=\XFCY'3 S? j `d+p5V!u\-I)Qվ&;=єcaWr_F'W .fޖF\2kT ae s+FtXuϧ$-3{_V}1lNTBYJҒiTj Cmu+$p|h8hn߂t鏽Up]fnm;ic%\cN{ R>=@:)}S?)vU^-OkwZhE65=8uxV:s&Mm-~fΆY?mu5m^p]pc&ɏJuҼjV"b)[H2)ʛ#{ ;Nδi.납?ZP1)UuN;ܲk=|]fS&_ o9VTVuJ VՕ l9a f?7.߫k#?֚WNT9Q;񧶿*.Udd1Sx&S Ub]6@#_5ؓ"E/[=ihUGNV,4þK8UR1*P`3)9U[Ci>zkOh̄j̀~js 36KsҀQHuluxEcZ!m̳n,AØ|Q$jkي04f)UsPM186 1&+;o3X~8Χ옏l'ʽ=Ȯ]ߚr} ܉ %ĚP /L?@^"r*IZ=7IZy_O /ш \%qH˱8vJRR++QtL  \%q:i:zvlpziNd)';Jr}*p*IqW8Rޛ nRc7b3&{9nƾֶ9DG")dR9Q h8|99 hdl,8L-72]Iǿ~oiZ>҂AѺ`߻ᚬo>] 񣹢se:,:Fux&nfs 33?ݓ{gH9jjJ%/~^ snR;oԔUtMw&f>ԑg9 hZ4i{>2Πb/c]w:'ütttttttt^:!x攊LBc|9ƙTrW1ʉ ԈRWU?N6,®D&Z3 +|bVK*RF6R< dA+ƍDNPEEp3-&Z(D0Lplu4D{S ΓtƳ3`A1zy(~ʣ: iQ֫^Ze:ʨԁCɴᓶk~F,B0#BXYL^jʈhA #(:2&"~fg;RZҍ||%fS#fW--u=Tߵ󥹭59-p!}݁$.-alYbb*f9k`}xBr8hHz) ؟f<5" 拢8hĭ4{)6#]Q %4AZX,vGEo_> ̂HB9UaaR"2j6qjL1`hiv82pqS=,Cme%w]T֜U˻5M܌$A['sUM*g[jJ d\WW^DQy%beQbgҝKyE:(JZ-B]JF(%rJH1H%}{#H ,3r#c6q#c> iƮX3cX(mvjrZ釓NL]k] 3#B,u,b4 P,h"KB)x0q)y 3 j-$& fh/#v% ȕ9rg;bȷ\cAlq,{&WyJ%)]41z$F\pΌT`Z T9J#2yF.qVqCfxю Xr?#ȁSɽ'MxX9Oa2 b6 ʋM}="ss4J`w(å*2#LK[((XU" 9gS4oU,g($^2Xo4( %OFbҽMg;"޽0pqZ6uLkٴdW\ęq&'f 0@TpH1a9Dǀ<N!pX0v슇$38Ҿ&͗,\Ws?0*zG?P#2c=vI=Stꘓ8<9;lH #y@anڀÌ$^RޒaYkޟ<ĩjR:#G;Fjh׸M)l^n<"ē/ H}eSYGYo}PT^k xFO9v*z̓hSH VJ{9GLtQck(r 1PEQ  MXXOtȘ((+0`"{&x>LIY,-,xGI4 ;U6q:d5Ay4z}w﷛j([][Pbs5KO5=O<a3N"t$IJfg0%}0^=n݃-yJQ$,XdYII,^*J]6|̨8̌19%Z9@ Nq f2'%8XdO;g4]N4N|mBiJESq&ݤ=VH4O+'u/T?#G5/k Dl+4GQ`sqL%VSI}kRTzZ4 &vEm{v}@x0DPI77i4S+wj|w54,, X3&y~dDh2Ay,FDzzk~kj^oK!ꄥԩEE5.(p b?}'Tpveo pyK8Cv/)9Rqz;˗{6u$ټ#IaM(g/e5b332LӆwԗFk",{[7` w"4n\ež#W,,mj9J]>5* a 31̊n'{mIfj>px*(`h,NFٕ*NŨWMz V4!YkǡJ}w"1=;86GE ` )i#_y0tN>% raf>nkbC" DEu(_E6li" i?Jڴ]ڠ{cVǚp5{ZKfԧI*յƲJhq&O+M+Z8&|\p~,EGsdQJ+"@*OFW'Nx;8%ȝt/zϫͮǗr%uNg={ˆKֈGSN^3e}apET>a%,.|'LfmM~o=bl(nm+#j^0F zZu*[LSo{UJX''Eʳ=w 3$)Z30S/LBA\Ek("KOonN7y}ikkc$LRu>h0@S{ɶycNxl)B:0wk:#UE;Nxm"u7_\zb)?]s⨇uHGF3qpN% FЀbMt魬Nk)ygKZ (Nl8!wRx "M5^Ls΃+)BNMN鵢xOf|4I}P#I..,k[ɕ$ˈc`Y T=*tra5E$W/4;V3Nh%GZ# $z+x02JILVG9v;'5Y.x^RGɼ`5+L A>C[s/S\z`7C⾛ֻbrvwWV'@s,MƃBN{ɱ6$Qs-9˝RĀ .C@ E#Ck2{n3J;'-y) }QbOnҍ}%>IV[1\ ^i\ jRzxi_aUDz\*Wyl|HnP6dQR> l%x ߇ da =>MS_?W^$Wyg#t1oF^VkSo-aڜ@Y9Q j-VT/I׃@*P p*ě.Ur-c1 %.ZT<h!grbT=XTLX_.je`k.PHb~fjg>LLj9+SY$Ay;Dq_j:l!t'C~~ jؖQz/$bU2ӹHxOxF(oyXRv7^~UUK~|0ӏ7"%!\&wO"Xrp>1K34jF#OLP'arQP&ʊ;68Pv9syuZ"W\(P\H0Ce$VIh}4li_A˟k":Nۯ 㚕v [wPOT}7+523 =>+LL,KN29IV|S)Uuϴ,4ث /?yނ˩GKls5rm F ~FPTKy3vj+oD&ObFHKj#eHAVwoLG eZ3 D佖MFSҒ[FΖۭGaXKk?oʹQ MWVx+ r-iejD]>6o|b9%2%]θ˧[̷] |c.7nxxCuKOh,o󀶷6@R:GzV9k֟_?C&ib圿*[J|P"LC +olcy؆T1Qmr%ɵ6)1W?NVO 9@lJ_܂8_Pyftl+ >,}/U6>t&."'^q<-Ff7S yYpLkj%JEB:1LY9)u@'LsXm&z AIR7.)`diRnwXzFGuIP@X3y$&tġp!UhB'f68. 
åh2Xb)p Q2x&rc5(@Er%:,t5/W(j #A!p , kŹV !:;"q}B+V u  Jg\OE]nDOo/Z6Z G@Z*#NkUNƅ?f <Β2C RlT0F{0O7kk %\LWNSN^4oK30W!8rESE~~s=HǙb>Ӕ*`@ E`Ntd:x}:*aUiC(ش,,rH\$sr[ߝ牃pXo#b":ջ wJxOU{:=ͮǽK9x8pio6.}f' 8VM¿nN\JF<}StV>BkcfvY]x q68_\cKbl#p4fVlB} 7Af5HcO麩܍,3GZB?QLV|4=]ٿ|DWZdS ߪH{=1y?4d%'8pQ(^ӨGXX\| PG~]_w1QooF`\JDKOO#@=G]w-P[]CuTmm~u5~W`~ g& Jϲn>r=.QxWRC ̾ԟfjC_]B(r s=Ҍ 7tzVv* >>lH KBe6zq{!.#9et4(EŢPQQGz`CBz~;ƀÑ$fItdS #)D̄&Tj"OY{=W[ieMgf7cO[Ů=ulY||}Xmym;)V6rv96JQkT V0JO*[NrOGO/k?Qs`N,PG8D /jD!фhUt ]/LijtL2[qפ=sp$`d;C^W bc9-7h4:ztXdz> `\_<b!;R_CbݧatR$\R/@{QHQ҈klX9aLU9UrN75H5OQ]A"kHvxBZ VD LLBQdђ|$Y9N!6F*MڇD1+-P,JDI;bQDHEu& LY3[=ހ|ϏN$~/'$C}ABQ*G쪊kU|*%lvvF7Õruq3 XloLoeL`So*BIeAR+l{dJ1I陡7AkYc(eMAR)n n۽Z=E}1Z-_^՚kZY MV~XiuݖޘƗT** !~ ml6e] nH:5bP)"LJAKF[Ǣ5ADrZ2ĮR/Q<=ߘu`Jԧkxe#)Û57M͔0VOs6vxKjSMx硜>ns37u[s1h'k ݐVR()XM1E!$j &L*BW ź "(dq]'}dtԫ1b^:{H?MwCRFycb,tJ>iJ5j]VY*;VV x;19zI&WFEP!#`o<@pɥ|f? (v`&k&J̡^>I%p[>zw¯ݿz^= ~iҢޏz2ܕsYyD^Z]Uq%j8twU[tW̫~xicF#w ޭߖ.+b:)Ѓg~LF׳&|ɬw L3'wU=|\wWo] *1⮪NRz׻7讌퐏]U+WAWUZsJ}M++AWajUx0V䃭nk߶vRH]j3l~*_j=F- I(VD:ߑ}'ꄿznQfeu5>n2%UEod 2>U:k, ~*a"+=AAj os ZHmF0-11!_~Krrq}>x7 Gxǩ2Pvnh~"jq~_&zRL75*/QJ% U n% |޾d~def0`u`Q' >r,G|(t/sہaPvgtRחE nnf7opIc~Af{Bf۟v̷?<(řWFnևh|[i^6:Ϭv]vGN1SUoT/n۝l&y1qe:(㰜alq{i6mbrOĴ=7g4h>~ߏpuNU TOD a3u)=Wi96ؽwy;'x9NgAÖ,l?^%-ο?Qޗ1bo/Kpq}{b䳶uis3XJ WkU-x! +ku*^2Q*B:ɀ6.aOr|Ti 2,mOaDa7´z?,%TrXBnLj'.mjz@٨@)߫|P(,YֳήyN!Z'd0_o &6{!;:*FĠ|6}cNT+/lWzZ%`j4J|秣ϓ.8Z_!n__"]?v%bDƜ$eAkʈ *dl$@ةrNF! %R SIȆ@kM%碉X;>rHƢEƂYB`͈h gQ@"]VOk ZjVgo4;2=_}l gZ MYD[ kr 0-Y=(Xi ƣ{[nkmybv kҔW.JBWy~%nmz m+ >0[ S /i}44i_R˷V)}-n WW.!6/gxA"4דU`U'425EeՀhΩEA0}K֘ʂzr"L0xyAFOr ;YRڹW(9/|2Ȣ)#(- tȌ+'<#AwCYݮWOyy\_؃ܒ&~?{{gr82\ÆgOK?T/W&zܲʥBpكz54z'#cT3=1@-^ @vU^)~zl!c]$ \wC!C!QAfcSUEW^1ଐ-.Gmb u2;e6]4PtrF{P⬵ -I;hQ_C(C'5ҲFzsP-f !B]C=~gGӋuնiIWCsKhDon j(jDd ={ svs"j-|mAD҈Dk$DN01h3.lJnhcڄ}Ic z_@6JDI;W !ShQuy1)_:BOp,>sj|MR(I41w_U * *EYt VxPP  ̎DG4IIƌ1hE {ZA7k ogbzԦZz2-=wwEښ)qn6O׼Q I.9FzqR`""=p^D\e{@}_DaZLYfC LuJ#ؠF*-ՌOD뚝t dF,J9adI5*9rRcڙ8:.b0K`4%yMB+OIbL"H YxuD`Cbȑ,žqKABxy>;J$1%YNEPȂu#+Bb0"Rlh($bmEJRb&ENa Fo I8射P)țG 谤|]k1 #1'22( ,w$5:7?uc/ڪ >p6J#*Q^ik{%;X\-ͱRKvR :a|Q˗UDIlWq*QcP}UggY &͜}J&RBLJJLmy2 j-@D&s_޴;>y,w>(S~ʔ+6Yb&ְC,(cAd/I%yyvھEfht !BĂNB<hNVjVu\ځpOYv?a>nJ`CNZt\ђg+X~K<.JWZqhJlmx1OX|Y֯90̃yعA;qʁktGUE?ü+sWCq3\ u2'j-u< iDWN?+̏-wVU%z! fX&PQ_}ڼU/} O/8N>k[ 3/A]ĭFW.åI8q^-/m9OyFkSwi>z0^WeّhVُxk}^ kmVү"?g/Y ?L&l&v70x5#K$gɶjl,˔tH-YJblY`<|9C[ ($XkOBy[7#[T_hCOj|̣Wxe9гu|iBZ{edw˗l] Y6;>?N>7}9IdxM_dq{(#hTNhl8^|=Ë3Tǿ"O= Nɯ?8sک66TPay)5욃اkYߧ_S[|X[;CZrkϗߎ"|bYW>U=a&\a?A!7MRx]*!7硆` GA}Γmͽj?3inDD~>1KA`^ˠ'D\ԞG!zeM'؛ Kȫj W,kGLSA00 SȧsZ |SLg])UL'V켌K iUe4QJIƏ+Ds񕔧%v"UTBYrJ\PsgzPPXiCc U6$$*'I9?$Hθ< eq$ڤ2٘qAIG|͓n$sP",`*?rۥ[fEsrCűgGOĎGcG[OOӯ(&)8kD!9W3[)Jƿ'G=9zZr}P7z4r˔ *V<_x!U)U6j1q:_)*o?m<|?6_#t2ՒZ"!L٨]T*{2.hمK[xBmj%+#-o{i (ߜ_׋T6,U~Q°|*|U&-GH4 Nkr='WĹP>@ZmmKW~AQ?۬Q?CN9R綄"^݅P'uFQzww]l-NKqnS |n HRj$ q5w GH(2*99"!qʰ!ǂH1(T J#c1qȽ],b)cQp%Q~ef-۞/mǻ=̾|Uq/h4 gNGs܋D)N$L3E)eHC:ЧaΞ pEWMW)43A' +Jo.%݈&bviGWP3]52lJ%Dk WTs.\Dl$ށ=ddTyԈ5!XdǘoBxt0nc1qf98qqV5եbZr(.¸H{\qqgȭW*%JY$EC%Q"{A"D %TI3)@7Qx \ ӎCCvVooke6WwD%rFQG=G?>Q#gTa' TAq"ss坯 VaړOju7%%yf. h+dySD HE&xc;j6W4\ gqD5~9hwHm-m j<~g5G$dlhII!Hf pN {T*l"&5J ^0-d1gfw!ă6\0(3Y7M(&Ξ1Sg} /ԥ{C-c4t P/vg58̉_/sgY+~sT_)W&DHJRzHd,9$tqFpDȺܨqxbG8cJP*_Ƞ32Po7#i!Ι~!g'no|bw3M۪q8<_F"Ybhfu =Px*4^sLjH\923B=#KC*no,>$BR(aLHLѨDOe'uGYO7ۀ#dz>=",^e&cV9e$IG'7BTꠥtCoŬu;dnSݗ  %ԽHQin!  
yfE5qa< Xt,c%1,zx̅Lk+KxǹcB"DHZw$qi#a|eh |{%\_QEԢgI,srQpsUHFA.í~zJŜ׵JZ;!U#CA`))}MhEZ1e:vbׁ7r{7?@ȁʋooR6樬nA,i`K;ljy\Ve׸UǛMBgZxf<-n [Y4`$ý_|xf[O?F6M'Ɔ2&yGO7;1+-:>N 3CrwvdswH8vݭcmn?s7]*tA۽#U]^j橻{[𙰟6-|PX\~ҁo:\p%3Ck6+,?޽f3:2yLȤ aƚ\%z"&o(+EnAXHRd|֭8C-t9NaYox2\D#RD $xE,\Ui)㉂# 6jxzE@/kO,//:RndQw$MxY^yzy9J9>sP`9Z0c⡟J?VWq^}SBD*ι7P!Uyy)x&/K?R.3J){"~uȭrUnc.R:k9@>.!&ji.S;c"x'PyGuZ } _]&y 8@^bM^ ۿAQ{}o>8\[WjwC+Cw{_,@*h4-LVm_dm1owTXh\l{DL 8ӪQt8OLK]W .2L@%{+C4B:q#%OB $K,))",!q U<' 69ӃVPVFI@!װ(r@7B4*QǸ ָ yOu2O3ϖg 5.w;o%oxͯw=.]w~u _qCm $OW |5g p7/>b֡Z9*gAP˅NT=U(UU:hS$8ՂƕG$0!j%\?Qs\./(%eK빱1$IDNVEHw1q!XD@Z3jGɊg~`=|j&䣧OD13g4Q9djk' k(@?L$/*ipKt%f2[M׭;G?C:A?-:&y5dTF'kbTy; R>Y1վsT߽g6B[PYw'٭8pU!J@('w@eN8')^F\(•D͆DA&[YDC2R0[&H]DKfLz-Ut=bm`dZg)3RϦJ]WR.K;_.Kit_T3{NyEB<9\K`~.=\OZC%[peWC2zߠ<'qUj_g|fnMLr Ɛ|#J�n0Mϱ(N24%>jm~QR`fM>ݸl dAXdo^ԵWUN7C˼2ӁR>P|:*+ >`qXgk}Ҹ/=P?Do~\ɰ3c+4Lpr$9O)H;Q5e+N:0{CW WRJT;Pktut| Zt ,gJŨ+EiJQ>8záL KȰ] ])\/K+Ȇ>kI `R?T~])ZgeGZ8]yЋycBHI^v /S; vCK:t#ʰ_Jj푮^2d}+P䔁ս΋O;Rb 蜳ikB@Z7 ed{i1nAtECW 7RJ&w:iGG:@r1ŐDW^6L;};{o]HG:@d_|`gr.-NWÑ$ sѕua)th])pC+w p2RQWiJQٵG1t_Ϳ`kx1tpb-T*1DW 8YTU"OWȒn芞9Ƙ}u p|Kw+{׹-n(ٳ+ځHW/z"bDW,CW -NW@<HWDWӃ󤒮|JU5yoj^hƴccRZSnvkp }yc-W-G{d͖O69GjM6o~Oߨo '4%%@oo/#1+7~{>[%l9XsVkYB$ΔNy^||7g=$!>>~uowgf`.Wg.goGjL[G \|F$k(>?-2tgZ*Ѕs&7fJȥ4[aFחcGqIC#̷;eA5}0 cna {hsftD)hCwD&HQrU F"Nh) rQ%'hѵѥx@>]@RM"|bLft@&qJJz$8wcFJIC֟KOAh^a'D34ƕjh+Wcl|h{ቈ&3'FM4Z@@T1T݃6d 9:DmcR ciԎFVmFM#dmR ߚ؇GRHQ"^7I& ICs:ϵuM 1WUZ:NQ^dNw!ZICާ$ҬL/m>`V H|0ab :TjMZPg‘.LCA[*K< pjVz bMEw1%t4GQF WP! Dbd dc5WVcG=D &O 胤' ^=m/ĥi&1fPTԃ*NzY'\4'_ } XymgU=+\3M"^~[U0oYF].0mFL`E!!ƻAy>o9@G6%.fFVX5U4XQ[H'#:CE7 VmQHt(5#!U/xp`\3c]Sic1 <4 - ޹-xYTAd(P5zΤl!&Sp HA|,?]Ɂ{;Bw*6lLVPx&hšWd-pڠ@4XAҬ ֚65X k̀z122[=/dO 4%尀h%#p=-z5XOQp .nI[ 1xx@Ap&M Yc6\WҘ" kЬԛWlGR!.OS,(Z@Bׄ ;/]]~Fzֳ+{-F(Go{$ ՠl_ 1.-tF UA+V*E%|9?>><Әƫ 3vн_/>\6z]_zw}s?\K|{]__U_^NjǮ0w@ǫ8?ׅO6mu7PGο|\bOoonN߾%i7y'˻$s{=yl|z0ӓŤӇoWȮԂ/moO>3q֋ۓRo4<ĕp]io#G+Dc, `lϮ1xa)q"e[odTKT%V!f*2"*_&CA~xNmV|ٝ7VчIeWrhS\Y0I Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(BUs{x Q!.jX# =Ђ}U3e燲+'7_4=PpC 3!W\G\]W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W'-b'b <. SvGppewW;; PS\% Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(Bի vIp\ TwW`jzySW _+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B PpuP]Qf{~Ե!jV_@CJq7B@/(M%}2D4чZm[V>a_4=ew_V T{[xT[u媼˂>;}33ɲ0+,|#r({by+&дzFV`QQңv(-sIWl~~55K4]x \Bemi@iwHӧCL˷z^JW3tp]+@i;]J5 ז!Ղv"S%"] XUktW jz몠dJbALEwbWL *CNWӤ+UJ]+dwtfL@iC%GgJ[A]`ʺC]Wgp ] ]/+c5q \źBWiz3@I))ҕ%g֢3tU#;hy,;3 %+ހ8s24,ȱfp ] i#7ij~x(`L Cpij刕Mq]PRwM~&0$,ru4o(ջ$u#w60>UThYx@+ra.07/b婢gRQsOi]KrVFz!rVgzYbGgs*Y6,[Ej1NNl藒jڠPI(74O+mVIYPozW6s+ᬪ*;LE-k=D>i6,se*Q+ʬl ɼ5[u>4Z=jb$VVKQcɖ,^lg263)fϦ9}29*άD֬Z׃y%9g|[;?Pvڣ -oZ/ ->Pv'Z8'F[!eO(*[BTm[tt&t%(KUc]t{9] ] ]I*V+lG3ގZKD骠dJ|DWXs*pEg誠U]#] ]iV\]ivh;]\%;XlJ3-3ЕaR.ŮIw;f+@i8j$N,3絟3XUA[fЕxbӋ>ጶ 07+c3Ht%oۺ+рs>;eC8A$o })Mt \X'4C6'iMhqi!*˷ ]^ bvPZNVG5\iBWkr2+!V]`sW **huZg#J+)]r3tUXWUm+@ier7:ajD:J!EW]I6=eD+ZCAQrRjW7(Pr'ޚ3M`gжLm)Mq[Vo+ߨjJrDMnj~ɃLexQ2iBQHէ5ՠ*kv&]:Vf*WXwoa t6B%Agm ѳE^u-:pw(aCS Adf%yҜre۔ \o|̠%sywGtX?Vzݑ\>t)Npߗ p]*^W|K|oÛd8T\VE<7v*~]A|Y%)}57?nnMJa6f"4(&E;3*Yfqt.+ Rv){]$땻)>v vTt=ZIS.1z1"|4DomTD٨h:&pJۗvYϿ#~vytn|٭a~!YmW L x3ʺ3D}=_]wE[obVaƪx;\ݯ˟oFsǷHkӫWKBp](Fϒ9SutÛ]\ %K:YjPmyee2:XGhGS\MHW^@WI/^0$_՜vʍӬʐ$HJⴑAI!"0d^ɩ&xO3R-CP}OKar5ojIp 1 |71k>cXXo#:!NޑޱdLj0t2^I#e!KAˬ=`3-7`G`(n` 95>&yԖ 2eV1'Xr'j"8=x08ۈ5[ϼ ٟoZd"r :\?^ۓսtދ̡bY2;I@vI‡$-п`lÔ Cg3t]8 ZdBBL"e0R>43%1z8NP xcu^Yu\FǎMGgx禇]\ߟ|-ئ];=tssq\~sOStdY޵Y} Ύo뛚[w'{>K>oAH+5B>l-m\^R9Uf ,=[Iu۪=ﹾ_'Ҵٯ̯=~ݳ24jf1$}֊:0Lydm$Ҭw:,o5 $  E K#&9TvߡӓB~,RdVh @Hk낓$Jʝ}d PqYok$Q!':y %a2\">EbٹlY.K(V>:څUQlԦ1=Q |qCNeBrf$%\eFkX,\Cc'to@pU"0xpାN8]J~|

5L-/IL+)}7 ?@4F.|y- Dt/\Nm~YxKAi7K6u0W|J7KWH}Gl_?26Uñ;~x]:;Eo.6K^9ߦri d_NIsؗ ۟{,?i2 jbŧͺ]}T$a5}E{)#nTN?h.|t'?~~@|xw~-8F«M",}EpoH[f ,s_S7wQb~ژ[O r |ߟ4؞x䃖Ūq? ׃ 8/}E-Տ7rI*j{>qd3?^ħud8s:~N틭*DA*gl#t9R2(Iyk;QCH'|d DO:7fWճ1n8<n;pFHzI"4*Q#=$׆)StN+AtLr@߻3Ӊ}.v<#N=kr2y=YUӸ(!)?X򽕗zOLRH]DA%T3[)Jߢ>%RDOqNbpL)(%^$dʝRP$;bxx_Df!sp^n؜a3LU̩ u@քHx&*Sʀ.*%,K*0: !\ҶE&o^Hi T<^ v\a+ξy"F#gT f?Z(PϦJs-RKB k!0 /lwa)s/J"- z%ؐq>\6J@h$a(6dqG9VSrt΂qI>۰ 褢\pm"S*O8NzAإ~s*Ѹ q]ToT2}>Trzڴ+)iyU\|Ykv~} {:alMQom+'3(]޲<˯*ƞWh2W=V_M2:חUR 6)RgP~%GDQ*qhQ9Xm9cSBrls2M 2:Fo3] oNy e[#hSh1q4(Bqvb,qˇ8sᨃ8Lm42DNNߜ|FCA߁IN-ue#a9*9eLwIo!Z橏$ьK r28MOb잨d\DZǕlqt},^ޫLuֆ"o{Si}e{mGOyl8ןKu! S .n HRJ$ )p5w GHePiq8ex!H1(T J#c1q[P v)BS ƒb\ľ75!oOM~ӠO#QF"QmJ*sb;I%ٹ'@D}hzilżViHYڴ8 س\iɕJ8%f&脶Aa@reC"^Cٍn8K&池v186jw vƃ!s@O6H.LR5+*Ԋ9xy"lRJ[@R22C*ЊA`~,w@qΏ"VdREh$DX2KXslsİD"2vGg:WqOyv,k>}xD"$TyZKK!Jq"JBl:rF(Db".j|L`:`Z{N<֣R[d %@cy=dPNb(NכOo`=w68: vVcMoDxjo8/#,1fhLAzTh. "sB}k6F'Fg?g`Tjh{-Y唑$iIPݬ/6rnүbÜݗ  NewĆy d!Tn/T?Q8ƢcA+asU" V$%.ۣE h:;5{ZqM{-'Wamuo".+j%ў=|h݂>>dO\Q7ԵCoJ^/ivZCCAwH7JwIhEZ{?yb׎7r/^;oT6Iy0:ן#I{J-DWAi^R~ 44,A~:_59A OWuʗ.{]8w }^ B{VbqSS`;mzC7jGzߦ({5RPQY݃mYBy\慹>V։]h7Em,}۟gٱɾew?t8ϪaJ=Fr<3يmk6ͭm7z6߆џQo:dž2lNwuaӸm⡺o?LN-3Arwz`ZsHKw}9ܓ`k*,P"_ Զڭ;!am{KQh A}nhf7\Y2%3,k2_޾S;4) B &K dR0cM.=H " ,QDw)}>9.?0Pò_w^pqH)!& 4`G-RF-9!^GlԒ^@NI^&GO)ݸo2}$yw£Ǫg(}Pӷ'S<>zRBߞ p~23+Yzu`/,͝$Rq] ˃M3!t,&XzV2g)VߺO<סyZO `ZTΘ T:Df~G\D/qq]]zN7vy+B5]z~FAͅ~ӫoge_wP]vhphY#(8?&ws6ׯ%ּ jB[3͋mǴiI8ӪQ?_DK]W JQ&!TQC!\UJB< :$4O^]E彔Qh  BZ)l6yŁyN:lr2gzIlo,3B. cQ~Jo5.Ӌь7-=q'!{㺀Kdaco]|:/]9ۃj*i2U% !w|qt6\JYL2ZJh+焫A-;QՎvTHA"%!5<" QeN4 p(DIΕ^sQ^q.@N'mL9YYR"W#:jós2Ssb즪!,^F@Z3jJ<6F|j 䣣OD1QsO4J:y&"=̿D?LMR{Sobs?N~L8}x1_?D`ʄ8qbqO qG? i%cvvB )_HV΋Z2E8Js#^3WęfT߼ix7߶ ?OjZ\5u3mJ6m6<ogxmq K^ --,8wY_M{}]avً]5E;`ҀAݯ?7˱Os&yW=r<u4^!.{-oA z@_|Z\O?jKIT  ipNJ)\9FOxb*'X s:Bg4(ۊ]f|Tz-6\PFlٻUsk/K+Vqfl%f UNS$)?{gG" ;Zb 8 ~>$)>~Y +[:ޗj$DzeOFjӻmI=E6W:z3uַ~tro7&wznɶaCin\mn,WΤO4]bK"{I`YoC^ FGi"P(vXv/"?oùz];wY.]0zCw.ڕY`u;+]|(wz">I;[9v 9CJyz$](9Ti$7{q}8sƒ{شGՁؚPm$5ɼݞ%|_m~NmA\]N#}/x?r㎁\Nse, )3PIw9!łJЏ=kCށs14*ڲ}S^q `8SȺW/Xjj,貧b7@?@옉@”"ÐK.蝥KF`iLXh?XjFS2VMg()+B`Kл[.̣O8y%҅%;Hg`+킯 .qq^ë!9r Eп0< 40X\a9YBBxa:|WjV]R iqmyKÛ~=:Pemίy/Ζ=9׋;VT3T) -|3nV1ZQJ,sP-IFC3R\y+Rd=EWG(.FhHWQs]).P3ٕjוR#K15+fߎ YhcٕQ&uE!]%I܌S3RR[t*Xj0CpjFfjF) zBL!])،#C+RZOv])%_tuf_lOE9x](ptvčM̀TgO';|{{hWB+3Ig6cOS^g .Ҵ R+6Z5m[;->MEjHW Hѕ>k(])O7yq*hǖ: 1;å؊vf(#M=b /])n@lEWJJ)9ʢ#Uؒ" p;cW])m|RJn!24Ԯ)@32\jfhSODR^XtuCBlz,O/ڷ=RB}c݉OgK4hZĩ;.5iЌ ѴJ]6Cct`ںC{{~[WM~ӗ6ovzb!MvM5IV_O. \Q~ri\^ ˤMVJTB@9 sU&[/M(̏ Rq'M!⬶aQ+n= nӻf.&iMDlr#v￴,a$aR6%ѭB܏ ~T֨R{.MoSeH.gUVҫy { qꌲTAIͦiae|Ws*TD/ PxHP 'h .omjbemo.]gbcbCR`t܌ B+2P2.:F] -=P].H+2Psf$Ytu"-eW 5+ ]-Q2&/z]E"Ԑ XR3R\]mueqrbJt$Ҍ7BjEWF[veuOҕy2h̓A%]WF(u%1Dn)R`hFWdWF]2JEWߌҎUN]@ϻL4p{OřƮQReUjߪӖ{gOgO%g|uB5E#iH<#Ic+6Z5mRĎҕ4+Í؊6ueu])0v+ÍЊVv])e􋮎QW!9 !]03fg[UH~{=JueRWPG CiFW;hcٕQ&u ʀ6+M]-V+QW,ΠGjgJq٧VtesMF墫UҕNgpc36U])EWǣ+%])v+Ý{ism72.vt%;V:'_7s]M0~Rpw5vjwJo+ӕLЕ,ڻVvUl~zC:szWM}yp6r^0 Q3[L*snNcWeֱLԫ|VW) Fj/d'KlFWs܊v]%c"T꺒m?)>g4D{ʭ5"4w>5ifJ gFIh5=4+fpQZѕ Ԯ+Ln1* ?VW BhFWɮ6U VJ]EWϥ+ ҕ36+M]TTJ􋮎QW1Ė9+Ýu/ɴX}g(-:]qH-eW ̮ ]Sox]eEWG@Kٕ74YNEWr \א 83p+eǵ(aɮ]ޟ:Ti2Iϱu5h]Mz ] zm-R-zKg(YH8;܁67`djhk7ra~0qkeVJQbe@MSٚiuLδR51ff`i$ \uC P Ɣ00sF&Fq҉[[*cZ__oMI_XB_G}uuy_W3=?W] uXNz׮#J!dLݨZ/Q *YqLn8ǿOO!v9#B 74rjTpZ2ڹ&FIjAxlFWK]-uePWH6+}II7+$J)7\tu< |S2gX_P2\VtekוQrZtu[ 0ftƕVte\}g([tu1zteRMЊg>^WJ .,:B]qLt}32g}Q2Z~Q3xҊwD Jfteȭh]WFDcԕ0{t]nhfhkוQV⢫ vz8u,͟]MN•WšF3]Mj EWVǐfT3()S$_~P}C6hpoEJl;xjy Ja1iZѕ5f7]0%!] NeiEWF+w EW!HKcv! t B+2ژjוQ.q{U4>hȻ>WcNn +#Wm/˻'/֢} ۃw}EK{bw-C_er1|(Dwns_㇫WZ|y{y9#J)l~.ExsܝgNJ?qNIo~t6>w6E+m)7qܖ$֟m͛ 3w'Xl²k*]H>P~||< ~Q|?3_oN|G%޿Cւn;fOw_۟{g_VV|3//ԭ۾o72P<+2diL]FJc>XR2wƘG)? 
g5>.^?z_~VZ ~=n;ď0rvE2!0 'ўIEتΟ%Ӏ^/˽{Rzzއ]sϴ|zb{Bw0b>0^‰9Y^H) }eA/QDۤj~!GƬ8DѬ9H]6X:ǾhF|B$cWrv{G |%sӯȾg h{ֲGl?: Mkt8K0N[.E ktV*܏cⱓseԹb#~i- rHg'DgwUAEE'{'0N4v^nGDaDM2U X.JM ܋Fg-Kg kBhcluBGW>d Hqi FCF9uFuk.4˺ F@B;+GR6fa]΂PѪ J(!{*IUܕNaRT) `;x X-ZpUTiXYi)@fT `6+(\|8$Xar HM$;4T2T Xp6L:@@=-V(!Ȯ؁pYV ++5 e2aݛom0C @ mHיcEf@Agԭ9+ƒ c<`h&b1S|bDJpPgSO ~rPfIN:Xi65hlD]`#)#͠-kyGD)J6 e?B+mebEK]VkD5{PQR@}Ju22-(WjkIk ^h8(l,@E@H v/UTy V"t5_Ƞ 0(4٣۠ݶbF\*"͘u&"9ih>(ؼ($0D!N&Mr6&^]{Oj*贫#[ݕt>Hh.DVP˖ ƛAFaҬd :!Jr IJ2Ba1%OpHv9k=/3(XqAyjN$\Pd"U̫ FMʴL5}4,|OC |Y%H Vjx$!38hGM`i'A,TGW>/AXŸۆjbPb$Sa>v} tcRykD>"C 2FMB&ȈG!A]JrAjy1h"!%:]%@_>СLG.vk ) f*2" b&#E4Q] C VA;8*jV,*a!dى$P>flqR(jS,Ԛ]-6?vXI3,I£j4ZIP*Hx΢ZM'-=zrDu,e4 %0lӨϻ_AKCj)M.ڹ[k-+\\bIWZ-w/]̺WHN^ZOw_n[DKQjkWP`}~6rX]rJI?Mx<qt:Yoy_zl-v[)E5*鼚ƚ4{;uuyƅ/*=*<%z&p|?ᄀ߽OE_]?{vyP>gn߾5@A/}5yV&o{cE{+[7rtrom-jv' ׫A$ g=>>Z zjo;< lCPq] [3\STĻ3 yNhLbn(ՁM;5bC^ju=ecsu?O^M(Oͅ=vnǚq`.9q"©v0J`:pVhGl K\҄6Wo޹6&/fyZ7p:w[?o;*tuzWtm#gn͞_dNI e&ElB&ܼQSU'BյWצRy{?=oPv=W/׮fԾUBU읝Ӷ# m߫ܠ|hDx>Gz.RiF#,r>zzoׯ|I`mt"7OWזa =ܿw/_ i}k#q ^r{^eBo_YYi;xyz]g޵6#_1i/e^av04vfv 24j`>k`lRWdh[nt;fF[wz`͸_wo?؎g`ܫX?__K;y>bFo=0/zR:3J6usBn}`cׂW6*h0vA(9%ŹȫӔo^?LAWFW+mm|d9ݱ%o2Ζbe HDWȖ6P ZlfB2IP,)鍌 OU =z#I~$ >[dRۺ %F6%2Y]an#Yk\¦dJ<"S7mp-M\_vg3ن7y{/{{Su[?}wwawOYާg,FO2RV֭Ya{ _xٓvA{]370*Mkp7A~}ڀ,g].E2jHUG:R#֑XT2i@ K3eǷpЮM-jtJSJkQIc+.(( dQ)9O1]TcLTiT20\tk_ge}K.bŴJۜb~x[LOx7τMF&?r鎎~#(gdC1$9t H%+6'Ԡ]*iQXpՃZ`pܔ~*PI=r` 䢥OceT*GLQg3qCl>7B>y>/{!e;ng:4BūM6OҊnqI(zҤ !3lkH+t-=|o)ɧ!Dc0J2,@9 ZY X_o@ Xޭ~}֣m=+r]1ZodY:^&RUIS:x/TUZ=JRme"Oan"GdyRCκ Φ2.)!:Ikfu Y %E g'"ĒsQ{.EBuya~~u9ǜ) ddaN1 TEå.$Yt &)zn&_;!L^ -/\,p G2h^0\L0HP9\42j)x Jv*jfJe ()^t]YJ̐XIF{&C9ᱱ6gjFusK8~׺3%:'e%>Vojmlz1%ZL߰;8첿jqL- ~NQyPK酆$Fd }9ckť)1PHBl zWJJ›țe JHb&_ H:Ji L؎4f+P, ЏXXx#Q}UeƧEZ\y;veG탿nˋO|yeN.TE*ئ@5QXblz9nha6׳MPHJ蔔V6uh!Y ¶`a0 tuo+q#vN莉y,lu jGk HkNJI%BI8Ì\6bJ#*%TQ"btb23C lEΎ@h82F񅘨FuvuK3qʨ^"N.Q7"∈CƏ"hP^v`m&mA+a-xݶBX~ـX'59PLFl왔@d*!.`JU:H{6NJFяIw'w,9*?Aa[tJ'ykŋluӤ_5g]z?|zy_яA ODF(S f [FEߖP323&9DjS:  6qyli;yb>;bR3z{o @^>j6}!)JH@Zgʉj:j%QeFDhu-!^ĈZ e+Mk_R)բ)0HE[$|$Ճ} 9aʚXtIȄZ2br|yBDiD3wkzLj= ;nKw;^jr=>nQEQѳ9d+JpDBޑ8OΗ4B !ObV#mCd> )0'Lr9ly%#4AJ#^u18N9޽*mzixiCɓVˆ|R&(Nd1)h?j}3uJ?ŀ>  Yew ?CF|W=otz IAFQ[-BiHPRa7S-*ݓDa0~HhZy`C`y{ ;qzy_NmMư?H+,JUI=܇<[s:`O'i%}P?zr:ȿ=H)nT辇*idK'qOu\Nn#yzsy?=~W c |N0ػ}+iv8\}qc7-DI:ܛ͎^E>>oyH)a24Ng/xf}qs-^o5b֡+\r 5-v ̂ w~Dz޹dm^M7 `U 'moyIӔn2ljP_/;1xi\cW!Z3F͡~6;g*slIZE'WxSl*1Ȍ0 ,m=>EB_TI&"ߥ|q\qB&0sPe >s5x}ʌ{S$%*!E0;P.;@#5uMVkЪeHGU8QT7ʏTEvC7IG_}<+1w u!`BLjQbj8Ssh19< Y[͙9lM vLD2.H ER !dm0Cd] i5V>ETW!M661gO% t&[\"d uf<1MBh};\xӫ93R ?Cճs'}iӮF %DA&)CPŚK*>L^bp1 &R cE ame&z"OޣAG,N\YpW#Fe9&e"JcbVTŸmX׀3:c; rLǨ,/,lVoߗc =2AtX'R0EdU(n1Y+N7-1s6YxTqҿӱ ޭRϦ9M`[)o?wc +w{##CsfR"Mx/pxcr2@ei Ĕ@ 3С*2(f,hsÞ2JϮf^KyBzCJ2,o"/]Q2ŨԐ( "d!#~P0@Vx҄lYԸ znvm7nӇ|?]] S>Vyrvu_Oy>m>߇~=7=8~ Ћ*py+bŇ^|c`^(| MYPwHzHԖmtŤ(*T eЩz/y }g 8)ꙫ]9"tɤTrُT xV >6H8k̩`ϬS :]4P4y wȳg oq )ޓ^ZiOYPĬ{v=aQ=ί4p\)$:p`(MbӃ>,@oUVwx{yG{H%z)Z^-jҕ&BBǰՀ)Bhۣ]=>[S>yrR'˜R"+9%Šc0ɺZGh4cϢ/8nRsJ:Xj}E]sz.k<_zixz5;G?SEg{8W2R] 85~JocipS?ׅ C5E ( }kyÆ^CEEO#{'o;Z*"BzLVtUxqmlkIi>Sz}߇W}+gxg`WWtFZdfC5ro|V&MҌtY?bʔͲx/u;]7_ypi7=5wO2 xf3-Y\F=;z&sʚMG7\}.uӜ1o?O \:oࡼya h:bmUgbm*kbqbRTӜE}r5?OxI8=#wP9Wȭӥ>wQ~m7;QilLGqrQQ& Gr+~%g}@Ժx{Y{ĕ*'^Ȯ}xoN]t~dA7f[U1ۦ:kV~Y.g;?W0MD5wz3Gi~6h64R[ˋl?\E4 'F@a%gƖ")V\U 8O)n3c?+Ve(^I8;9߳Ӎ=,mq9h!~Ҋ-)zKŨsњTnJM9)"r@c:G^l+Zb)'DR!ҘC$FqI )PJi[]7ښvxLOWIolq2N%)Ĩ5q8B l n9/y3µ-uy`2ypfB-PݠsDIPh`M]*6S} AIRW.)>1h]j!y &ϭf+P\H7"sXoܚ{$&6>ǡxlT*OsYñXPyLD:%6Ld-1 +@Erե;Ҽq4[ȄH#\C" F4Q#.ԓ-A[ԹDnޏ;pVzcK?ODDHKnX(hS #xÍqmEW3rH \(QB@8X@qNyH3shИj=ߊiYxEAf["rG68hS$ΣA'E}6|$ium"n͖vI\棢22')UR +6*GFT:JYk{ON:0͋I]3ٚϼ:>[L,9eI!)-=~ktҕr1ZOh8>Ns3gůxS\~|:?s 
p'iUQ]6N(j4C0J('䚜+9ĎF]~:Q^KSN5kIUG'$-,рD鸔\N0Sr'W'eq&Rv:X}SVJzͧ*-tvv:>Tc(Cs/G'~_t['6:qG7&bOZÏƷ7ɅWKbt6Wc.{r39HWA{/!ZleKt+g7(!~?&8O@fmz5Sy2զZ]WC!z(FFrÀWߌb1HEV:})#yD~Y_.|O}o}~}߼L{?޽y-8Gr"aU%;E{MKV^4b^O|vM]^bnetɭŏ7zz?)Sx䍖٪q/WlgoT+*z*sTP TKB3ѽVHt64_k# sN+Rz|vm'ʥEyԚ됌OUiD'ΆF啕|>FÑ ,g3+GLQ`)&*a\&O9pNIw{:XjV#p<@Y; 9LSWYO^(=Ɨ6rD*պ@颶rRid}B9stm)g瞵g$z@DNpI'g\`iFAI* ٘qAVlvQHG2dlrT>ryw$>vap^I{kLn>űCGBGhz>^ "}h\@7dZRXJg+IŸNz .H?$ҏ A"\=0YAXKQ`YrחK%LP5/Ƿ>r^D*IWCio̧UczGQcNY{Sl[>F`% >P&(pE0PTNt( +H1[nx䴑uPf{9t>RV;<>!uyi$kU)* (?HQ\Dh l㒍̓s Y׆$ZbD҇ TL$HN ExX8`ʈʡ,YAokx&Q9ǖj{a1)St=tOXvEmCh]~ե&ohTfvKgvL 7¤.Z?Z[ch͕H]hy8) 1p.1 OUqv;Ϲ:FJ>ox7;)iZu,bZ\b=Uw*.ٍF%&DC06QHY޻"˺dFfFE!KvۊN3C0y<V-؅%od(CyBn`{Om&R6S ؋f*tnbF-RW`OF]!𧢮2]]e*ճQW \ã,\%èKɁè0AT2vd SWN=ҙY;^b1arr,x5gV4N=a "g7__}S р~|JJKj $36:%J/oma"nߌ{uۋ78uxl:a GmЏ+wpGUKe*'DLy_tv )zۜ3bBEN >M1z߻OMdž&n` c˱ s5eSP&sJa, ̻l6τP;iΦ+̦uuwM.;܉77)C[XiQ!SJn YIptՍe M TX oPp7ee9#V|ղe Vs!z>3T5cgRR+ϓt۬5զ]I#H,g8xK4x(W1CIIxHGqI[ZpZSo: At"4 )NHCAk^$G, PpbiBrMk"8Z* \Y\2e$?:aهtuu]k׃;e59,$fqtpiJ`yMPwpt=CUƻT|/~LN%q_SctdQg~hЅrlɟMqnn.C],rUNlpT!Vh(\9}s[}^dڗUҺt }H.c[kUg^ɭ},|l7 MjuٴetdڥTTk`DQ#RSFjb{5Ri風(j{ D$3 1jYh H4EN3A2R.ȃPD-&Й;+9Sڮ^!m^I+˅bNm(`,X:M-`8!κ29J(Q냱F]݂=-ت#-GO\UNZ|*T:~O*{&#`e%g3b-$*jb w MX5u[ujM9&ddYhD& ä#V nW\)X3h`]須Nz,Q:$e( @< `:_PMUhB֣1Xőp.uXE0Le^UK,m0ƃ7@|;UL^Z`H:P\@LP@5P$`^Zj7X$kdd3>$f+Vq5Z,{Q; ;e>~ qF5f l p,tr5F8vpicr T{[̓C&zXQRnRİ[-F6mpqpu,Ef[DrG28 `S(΁sE]2NGҩ# l/ۏ HgD Rr@"Q* J3< 8PAP$@2Aj֎@ܗce¢H[-[}4QjgbkUV+D9HzܰIؿq/+R""A6'i.4êa F*Q ` 3O1 '|w|p\}BpiqלNkmC_/[埙%%w&إ`a={*oW@.in;us%oCk)NΝ Յ""& t:bZ'O HD0!RL݅г݇| w:ܛ WE-6`Ӷ!Jr)L0ow5)~8(EO,T'M}ZCS(E^NBqo^1sm avc5 mu0è46Ҙ&[eZ_ /7ōWwɅ׋sbx>=ӏM9Tv9?ލ ~KPUOTzUݐn4fYނ)pӉ}vz2Q^ ZtUVU#Ey SIs`FEܻ<F_֠ /%5/*_闷[3v ^WW7W{{z޾y+_` wU$,ʮ$j>ܾkvMۦk^'M!WB|bE1q% Jߵ~~䴄O6Z&Xĝ>dԾ&(c_|By6 KQTx?6mY֯FbXb-3 wɵ1)A)|. 56ұ Sg^0'-ґEF%0sPXN!b&4Rx/@[r͞N-{:y ϥF`mh%V%T6& %&D⤃<19ۧ*AukM9Ƣ;}Xdc?xÌT. 
:::PQg8Ng uHaaN(E3fͬ$C!tDXHA"XHMI`i92Rq'8^Jr)(E(,kDsL(QmGsT Ȣ`;X+\$Z(D#FfHX $1rHk-X9͓S6r= Z6DG*"xgHA~]<˗ 2ɴ4<~\!(loN?Xrs֞EE+9eODKp#wpkHМi% \Xk{$ iԑuTݽfN *m/& sX|=K6.y4$qi_\@VKg`lÒ LXjb,YydcOPJA4!iX(` C@F,L%0l\݆^mlD["݅9Iv/43ֿ(ҿRIW];e*eU[Dғv԰JyUBdJ%ȧ/H#^|QIM^ 1ۻSсK?`J ?.Fa/r1"Gߟv?j'wZb?*g0((e BΊ5z ?14 _|3BQM.$&fIe[mcYNlZ{3؂ Ӣq]~*/O,[/'w~.O;~ְ όF|Z?hVߕ \yٸpcYsJ*X?t8lf?&2.!Wѧ?l n9f `hf)1}!'0}fkBNEwaI-KQ+=q)W`!#'N:NmZ :q4"Fqj[̔J+yޱ ќ~G!YFDcjs6femZ2-cFiEx'3 pVx_U+K(v)rm1qP1o|\a;n^< ::^:cRGQJmȴ1KJDc1n1n\tfOR״>CPq=z;Iαb*WdEi<ބyr:ކVJ>܄~k ҂VC0ɅgT}|WMjbttz;l3ZO+.AiiaYh̻3zhƙc.W1ʉ,ӄ F೔(kW&6 }ʋʳ͌Qh2 FyиDNPEEp3-&ZhD0LhX) y̝vX-"ro0CJf/h%W9[i.383KxY>ߗ۴._α}eSUC#L ^ALWOW %Q 5 CÏ5@U`1 eZT*J,ˬIy)3Ί6Pg U01x%Hx)aI%}p"Iy"I%f177gJ?uXƍgC tA2 <Tg?Ϯkm)NENWAG"X>jQ(R" &58-9O> s?!aݜ\qzf -kt>3w㛲VgiGa>pi R0'!bcOFs$8[E<Nc4Xhm P0etۨ 5 T!SFf ^qr2dz¦(,5vEcYȲdkuv'34hL7sNVkͳq^Hu42E f aό ccP~%fPv>NoP2v*#DMcq!*CdsMx2( Y LwZ$"sL@hQ3- ʵ*D '-)ǚ锒=DO?@|QMe=.(I-@KhCc <* ?s~]`DʩR ˽=նُ,zT9XF(9.j+VT2m_2mVA#+L8a$dD#%H%sI8'&-G KI*>jQ͢tJ!Gz,c8 Lq>^m|XEfy?Bg$ډ6DpʛC`R$Y/K>'n$QccQczD(y.R`\QCԎN TG@gy=_8Nb(NHE#|=lFMSQOSdp[Uͭ5؁Olo-!ޭ[}f$%jFHZkT0# ?BQ Th.@sψH\9OOfWNkC* X$f0P"6 `62F,<;yN9~cp\>3r` ĩ-&GTl)$I%ZJo鵾֯6{PԢCg%C&Cz)*m n_*Yۇ77i.[2a< X,$,Wpډ$s=Y&M|C\ӊ {-'&`T.U͝zTV>,WT%9Z-u)aZw[9dP(+\\.VkZT !j(}KhEZNYYܫ׭K6)06C=5)IWn# AS+T}=tXkjP>C`E!8L@-cJ x%KM9N](?}iRq0_(*Ѷѫ.Z 6o5Xݩbz^޼￟(%+jʚ$o[Ȓ79.UnrTs`R6XMv|P}#jvlQ?|)ULM}tW-ӳiW%6il ?Yw9F<]<7l9-tՑEh u"Qk=u|l4g 5z}"_j=wB,g~۶|ya6˖- M;e7W,D~+gtPh;%Ah<=`a)%:+D\X;RNWuG<0j&UaN\)~I<!r~23ԋF2E8JRڍYdg:MPyHoZЄL}t M>z;V}nb-@ T -ڭ|hT_w6(/X n5SLF9TjSq3q\<-k/۬ݺWfz2#kq6p5\ 5D:\e)•VJCv$ZW]I"7/rf7gwӤoS]_} huB Y;|u$\P]ժ$|5&_FN_7׭W>_s/sX(cwf8#J0ZH%@,kpH94D|GZLnGJۧ\i/4W>\iu"b|]_ b99 NeXQ\HꭵXf>./a7g%W}Ӝ$Ԗ}=E'3iýn T^89e^0q%Sj+@_{bE-OZ!8tKGI;dkç8i|WjuĹ(pNMշ!?h/VqS `W?4]u>|gpfs*br(֒s[o{ %,,h;9`PV;gX2of8c {ZfPof^;+b.OP=?{g'P~E6w l\[ENJfj#h'Au _H,w§7?]ϟ.ѽqgo%ٷn{-.հhya2oioqwx;{ȼm;/tkGX@kj.>oovץ̇OJ40(Ʌi["=ih~v|)<-9IkNwގ_H[o[|2=8EfUW>g/mIt^3|?,Ҝ<"Ҽk(7O}7?mrjر.i~*CWbkB9\`R<$<@p+Dp| RpP[p&Ǜ̧ϠJ{,ml# L&Z&L9gՓˮ 걦0TxS͌2ւ`'&-N#ԧ|~|Gt[L6:܃N !N %xKu0FV!G3V #;[;Npo,3sULZ G~OS$3\L=%LW2 ($4{~:@~g _e>Z6Wh8ag1,\,&T 4'/9cbl'de{r>KsCdز=o=ƜFQ7x+GAnW9 D=\Sm:}Xd$VKQ1!0dqa#ry,,RwzC9>#q45(ჱFB>%ÿ6(b -ޓj03.Vq;MKrhef@3U5^y]b^U_c'78w |l{q4<YM.wNNr:epj%M[N5R6auĹu))3Q*dx c:[\j# Q)hs^zK2T|UO:ZZ!H;2D!MZtm!NDH<a U[k4OYY3+`\yH ޹&£F6Abn[gy{={.pI( l>mY\~6qe) m(n,S23.׳8Nk7DZ Gt1;jOxa{31fZQcחUΉe(XA%qHPRȠH%H֗IdքQB-p))\:@^D?IPDRBH`֘\{24t}bv[B'TЪ&rU'_n~Nx[,y-4#ږߔ?nx"rCD5j %|OU$2{I; YT:E#cJauJ*Q`Aˤ=q-C3S"/sQDNw<8r= BgA;ߜ ?]>, I&g5""Vs@6.٨&\w;0j+%Eˮzղ,v{'@l[d5bJ3~D\ dm2,$>uF>@hִ4C,ĸG@"V!3!)1#H`YA;@؍N-:kuyn<7o=U={A;܁m`/~v/? 9<Y#a Ц |-_Aa&=οd&Z8#7!V h(M! 
W\Z2!; ΨΡQСBӎ _pt~ZZ׸jD4"x@T\f B X}?&<VgVփ|!YE81iB u8UFO?g gIK (g{@*ӿs@%F #VJ\5jd%@Q4Ep~jStL4) k!i|A^$ 5_͂f)|Y|pڝIy`Z95㬿|ӱz޿V~qe}c3=]}kzJ@',;x:n~c[q]õ;Y{G{?s0zwe4'%'dLNεrbx'IjqmVJ8j=!ma'Nq9¬ 8n}VJ YRi5?,}ީxD;E{5 {w:n~Tc(Cs]{F 5lh5|ٚFgcBm=Z?kW~/],>I cq_4_ǖn8GzYN;AHSO5$ȁfs|D q8 G0b^Gr>Wٞӄ4^kóB U:;8}~M|nz{˰qJOho4we=*7/||OGg|dz< >'Y9TSX4 o~>kEJu- Z|ŀү) P%Vl߯Z8ak:ߏ\&wz0inD7m$A5c A)OϮD\ԞG!@U'Ά%UFCpsv #&DШA0)&*a\O9V$'ә-p͉U9"?;;_G W;۴ y6?*u+)g%JpD rl唐̖K?<|ç^,I6H+&E 9 T49 `kFdcrU'=[Wat/;Z1U3lwKL' 7W?.)߃cǎ^x6vRNw~Z}+@b#V@hy8i-:rԑ%GOqN#'AL)(%^$dH(wJMRBL>ǯ+LHubW뛯^L9eӒZ"!LY]T*{2.8Yh 픷.Wb,6mrIi0xb[V~7cR)cP(qf*Eeq}+ҴRTRR7X) a|^>lO; pvJar#5wcT4 qh<5(FoqzѢ5JӈHj ZNI iq8e[Aꐌc% MUR*MR8.U lb)abJľvi]iMo~{7Tv0 WNGs܋D)ə$L.eHCЦaƞ p|MW)43A' +Jo.%ÈFðab͎Pcf<8ר|)Gr2r@d1\QIV̹|{eĒTz RVQ#քHbWHcNB IѨxXL6qۂǑHYDdCG=Id$@ b>W$AbL*("IİW4A 2θ<<apǯs@EWB,6KEZi.MuJAD)w($Jd+H蔡*)Y`&Eg~.>.͎cCv>v5UF-lٞQAuگ-n^_kQŒ =*MUܩT "\Tx͠ǣ{97LL{7EzX36R9?5˶MQGl4p92aH&-uX4FKH%I8'AhBSFRzHd,9$Q,$c"< "乼m÷ sW'IP(xr,G>)S IQ`v ]pFNCP^_)2yqX}Blnwk5vY$OhV^&e$%jF>i x `ZG#EBs5wH;)<"t=|mPeAvEG@^ % b Z: 0U dn ͽ1 8N_9޼ĩ-qUl8fSFyt{#$Ip贾ַtVmjѠ!z)*my!d!Tij/n^#g_9Ga$EpimEr Q\89p,RHDaдbi+4&<`>2YѠo{'T5=[8[rP i>~Xes= 0?/`EZZKh]OB{aʻh1/4ilP힎úGϝΕ > QŸklV6g>ݿtfS:2yLȤ aƚ\%z"&o(+EnAXHw)>+Nm*YV`v2b ^@Qp6 >uUZx%'HZ2ށ^3Чjg)57Z[w,/l=Vw |`?;J0|boχ8}?]_V8 Ks6TsWm*cC`SL~J^G2=d}RvDZ}S'.\}Hy'^HLe7 @ @e!25los}^6 v=5ƻz7q؛}y!Ě.zAQ{wozwpt3l_Y]U66^Xقb0`Rܭj~m{{rBU7b1WbZH4gZ5!ОinL%xY)*T2J3j($z$4IhTмv彣d)%%eZ@Ă%D"n0aW#&'sg &62(0ESF%28OUF2t*Rr6<86/ylom^޿{r8 Om x; mvm3'$㫯W xq9IE roTUz\b":TB+]9'\,jީvT-Jms7 uT@V^TmjrO+ &V"Y:J~$E/5"%q*MAL!I˃}. #鏺\h]K|^`urgzꉹGݬusӐo;~}0χYm/ hPS#OL޽{}-y(@I/_zD^oܴžŹOD 9_65}zc=]-ǂz˱rjR䦋2bFH]!Sr:t 5Iږ)bo/[_raj#i~ij]<޼^Pr?=){b=59F݋Fʬm`"6\n}k7_/%; 2M0@`N$m&CVr"k}\Jշ1 5qP7_tSNݯ9س 'PtpaG0nɁag l%BYaa@X W+kFW֢+KוPRiGW.p{qEytV4$W)paVޝ7g#f_7K;>g`a&S'>ZƄxGBpDiZ1j4TMTK״Pz1jA+ҕSFW] -Wܤ - t1;j+`KוR)E+}m]5RcϵFt]-g;LcvcU/AE`]#jѕFu%uEkz"JGJqJiCcWJ#t5]E,dPJqԢ+J)4v5F] mER`ɠz*-^WJ=08v쏮A==v.y"0Z>ғA++tجּ8\.RLul@9=v@:4DhZpјZ4J״RnN1#noΗ^6l]7]sf5p+_h}ӝȍZ-]r̒mv&Q]fq^߭zW Ӥ`c{%fki#E<%Ds|U79yҷ$,n\kI˵oߴjA+zo*^e_h~L5o^hr4);i=9wM3o}UNykpXZn` [,})bO1C/'g~yۦEP{2O|\^97yhIvts&6F|f6)6=p.grnzlb ԎZJ wM#cb;%]ÀFWҕu-t,BoڊtѕREWJ[sfsQ{+ΠõJi/]WJ']QW 5u8z+j+%W']QW9HWLѕb5ٔ6?Ԯq6F]`]UOj\4X?Ԯnj8xcjZ+ΠR5ѕrJ*jU|`ȹj0{ \#2XC(}aU8Y B@sS{F}tkt*Ҵj4hZiV&MPVʞIW LPC-Zo*quȃŊt%l5R\Ji}RJ:cԕZUtz:걏DKJוR38F]])p3T +t5B]bER`jt+uJ)fHO$*ҕW+ō\J)-Mb'?`6DW ])m͔u%HW <o|])]O g0å#F]]D0]]f=8 Rm|zytN"])pJqkѕ#MH 8jJc=ѕ>8{Ji+)c5JV+=|a?vv5M` *ZtlJוPL""])pEcWtKוR"LX*i"JqmEWJ{)t pbc:ZAhܱ#wqJe 0@W0YhVgL6af©ޜXAg'o;$[swJ9eX }VtmM. \3J̷$m-S(aaL[OO\a-2a:>pKڻ҇k$>$,jKN?IWUE:[ _cԾ;_q.oH2/˄Mjy8k룹m0jҬyhpY'Y=Ra'͎P."]dg8PR\jѕҢ-]WJä *"])0ӉUcK2]QW>0ٚ: ]&ZtKוRzt5B]j}jƮ])%MǨ+"ϮΠFWD@5Aut])wF(=+J}=R\Zt\ڎHlJ[?YtpץvZjFܳUjfW)w96o>GZ'ʫXlÊeAHTIS~Ƽz?쨇eJ|92[d#2B7=?6Չ{. 
]y경'f.¿^Nևӆs&笏}]~W4׻vRM'°eH MiVҜPnI9IK9hbs$\ӵ $cI&PcNZ1;Zˋ7R*K}}50}dHgB0eczfR!tKel Yb޷޸}5!`ɑI(k{ |Dfdjwv$:@Iڃh+'oZ'vR:MIg(شY2ZjwH}=% i5k)"b3w݈F<GJ"-~7W&Dz')$FJy#& EJ9~7w"֪`Ԧܓ 3Y6A'mz!z&Ym$}o!}2^n@Nٷ)&ҬKoOrdHߚk-?$ M+V&RDmM$.RBFcϡ3 MGtl: pm[׶.(Eh5 Pbb>:nHMɉ1j(H)2!u;a.rzg@o :))t{겴.],$w%VB+v`(<$,R#mx;+Qg Kl!>}B$֒z%pz˻#-qtb%ńMDyzj\hBR,d$-NnyQ.Iіǡmvg&AmZڐ=:>H^w~6]F!]KlƓ4J-AZîMR:m+%mzw)wA?{Ƒe O;شUc.| IHys)?([RG`l6oW9UuLt&J9d,خ:8.t[ cf2b@fPl\F)EB> AA\J}2ɮ3Jt"T]zDzdgS 䠱RȠE $e5̀j@o]\Q?A2n@Ơ) `)PNiPHY0(3mH[(2·P9+zXĬ'u"p`L ӌ/˃ e3k'DqX[9 i+XGeұK \V9%awX7;KQ@;hՆ?5+mGv2J"L]Vkx5{RPR3*Fr^55$5S)Ah8PZMil,#  v/PUTy X+=sN།qrI㗣u[kwᗢH#fD' G1!EIJW!MȾf|aC9Z0.$k.C]Z|ou/#B&>H`"" o `9:py@xc#P YiT2Mt%CHh|pL Ρ&Ze&5:/3(|" H&jZ5 @6!+*]A =-ZiaKP>n;lE]Q("Ndvik 3`Q' +U 2P(EcDMnƂGtpPuJc-и'`ҕȪT~19Mk.,aF3kҪTg j4gҼLFL@Zv!;tsp˺q9cP׮BT4`j7[E05^%HL)-|X@Vp*#-]A-l1EO zȕIѼO30%7aFF9>X2'IB r8%-Cɵ+z([0ix "ixXoCnRM5墑r7pid" J.YxY 4iU.d.P,=uQ@Ւ-0 Uyۈ'34!wrt o 0@5Kh!wܯzpHT)`2,l %ހG{ 1KfQ0s#Hi\zr ~Rl͖\]ͶK:?krlH϶z1k[7v}zmq6~ZN/T_yOA WRN۽n枂g?g7nKxj\g8>b=0~h $}'uf@e~L3#~upF(<%hh|J d>X @>eY J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@zJ=38%MQzJX @VW@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X d@IH#R`D(`.(` +` zJ !TV@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+HoJ HQ#P)J2VJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%Qޙ/O ZjJO=.o7oԿVWuZ 1pL%C xKAp ~.\ְp)hǤO"sќFF"pIW^?֨/*[pkqMWwVuݬTH}t9]ݷ%e=AΒh~ͣ+>{6Wy 7gžY}&~ofҊ[E?ػXW 1-P\d?vu=>_-5a+C٘^Ë{q =Xj=/þla-mt1a}yq\y﹌w ~nrq͢P5V/.j;rCm nw~~b{3{hlsA`oA ս7xGZn^wɾȲfRĄ 3uEyL]tߒi,fd471р ̀Ա ZY<Q>"}3^u什j^G嬔m0ݢQh.J{S²;dqu~~SN ?:˫Cg,`?.ʼӫwNh>{[|Vȝx|}[Yg~(WNNa[XMr>,NO{dv[t)t竓VFkUV&7I'_r1CagxT_o17+CNA=mhw}wlwX˛ YC<[7o6ipD\]~bc\ _/ϯ^߾kvy2%')j돇:}w =}ۛ+X0(^ჶ9_j0HJMg u9@UheovX ce2s>+ c#YXZ-ڋExM7>V?)fC[qYyc\S =tQQh%]T\CUM>ݖce+풑׈ z\~?333n>l}2n}l>]֏x]wO"MH-Tb8}>}8 o.(h8U*/Q.n/s*H_Dc"+90@'9:## k,𞣧k-X`7?cW)Mj 쨪TzoL/? p8n|jy&÷E.n\Ey{Z^ }+KI](3YEO \8I7:(^/r[v'e^7jQ_] 6OgE<=Џw@{ͤ+.Egy5N9oUA2rQxRrP=.{Y1,HxV\˖'bJtN7k^ҙ"Z]^۔ZYp5ۥo5Gdz")9ny ENse165tNIVJ*B3i%]h}\=Sd_fhYi3cOZT{rB*]19?ʸX$CWݡr_yc[s\2ٮ#6ZUt>Yik( :?{׶Fd ؒ~0^w3Fwa2"3%)RMr~~F/%%S)lU8y"2.<'i~@tzEH8Ԣ< ʅಥY"(Gzͺ8/|SE|FpCMh@9 YO.XmˀOI~èxOR?^f-WByןdaNyGz`)g DݑN6|;SҖi/'C)mi;2C θd\Q>Giΰͪx@X\C2Vq%ʯō tYr̠4/J`wgI _L.q>}\D3o(trr<d_d#'oFR  GŜ@1"v(t,t1{`T%dΆ5|>DÁH*ʌJ)Rx!/Da1aSeOw͉ב=q}MZՇLY,~ꥬ$nhR{l>\ $0J+\t!tAӘ[{8/;cQhJ.!rN 9gGE<ζ=Gƾ5MNFh$?fC(I /!!e`MU8 +݈Tٞa>1x*A?!x sQFT^R8̓s?O HkE ){Uq۴,͆W盇? 
gF zɲlÝ]U_7Zdқsہ0:+f4~uqN2tg:tسbG+ hx9/gք=bj4(QD7pN6*0h hX {JJ䞋D(-(%KCxIxvV:ZcZF-9Qy`I8ֶU)Ԛ )u@C,k }VdRC;W*[ae2ʔ8S(tHhϨ[y͌f|7)Ljǚ>fU /Ă/=y!c\(Zi.Ga| +RM; 4t!r1'b%r9вzbOŶ,9b ʼnh"y]:v["@690f,ѣق1H/]_) sEla *=^=Te\~wjDc9YLې4R%lEMK6nYUUbnUcT6eyT:)t2=SD)rP<`mC8+u-rz\óN=q=1c㫎R=㫦Nɺj?_iv}!ە'so㭟6WϿC5eAqJA6 Eh gOāco1 / 1"cJDPK+Qpc3bq')0T-j!=U2T\z פ^|n&h.H6Ҵd>dsQj"$5,dMVmRJAW*ʻdo_InZH v3i0Gnn'm᷽}B3C=/}Hi[Ľyyz{"v}{o=/}Hߓc0iElYFgiҍ>FJ asS8aR<]U5K ut68>[\NoHo*\]wOݥ^;t˗cnd]Gi~B6(K A)E >`N~@=6h06=黷b^c~<8Ț-ώln{|W ҁՙ{e5hZwozm=n-=ፅv2 B7#6 ̂$1=_}1Vq<[RpT* v TU#-f)[[ZE۔2 R'[`zS՞(U56Z9qs[řSKddVDΤ%{TFf%Zrj^%՜@诀YՎ;l=7M8wg4&) QLzWio}Cgǟc ʨjsF'zT8<^J,ONXY1{$@I 2li,<1d_NWK!SWqb۞fFM0?_`@.m}v.~;%P1i B $%C2 PLEh.8sQ3ä4]NauP:@97TFcpgHC n TYĹ_@Ï 6$HT ɦ@TvlS#fN S9i1c/D$4>L#a"r Cfh] Z܏0&桠vq({͢s\˽S6zP<WJBЁC ff4YZ&fR2WPe4W R|vQ:= V1.(3ynF3`|>?&;2My (5G#gك"C)bY1ι R+P \^NC÷ ;ҤIVB06)iGn@?z6a$뛹Ui6=lGNGP-Y1y<_UGrWk}eOhnFEfd(lU#6H ȕ BbLӊ,xd:?=tBO<_n|Idg>;S(O"`6r'<yvP/^# \VɃ.0DS,@kLҌ&uZjZXCY7E>  'ږN, ėjxQIw/AYi[2a9!dYgA&c,IR@y" w/}BӒz A3/J4 AeȴjP"rZA$2 ;WMZU4 `ci:\fstNaANKDwrfX~+%%* k`ˍ}/H{<KGRκ6aBwE"vx)2|ٻ6dUg7o=R߇aWv%yA` Oa[Y仿9HP9ȘbKstQ:N^ʕ԰yW~ F\F|=Wt)P)hz] d`O<~` 4kk ֣|)hH5ѸqxƔ Ou0|N|e+SVr6D3Izٝ,ċ$//:Fz`<1mꨬhӪDiu#+Rf0>Ks4`h}ۻ*rM5cc\jǣw.;dl^qsQ|&tտ\}>uƣ'd`FLB3%UFy}i)OUn*oꩇAZLbrT.UF%ekntT+UE^;.TDD-d][H5r(T9:H= [AB#fF֯4D<\$kъiz[>8NKQ-g*!yDH{ÖwA2"$҈!'&HN t2d3oZf;Ӷ]\j|7{L-PT;%9kN9+MhS3'J:.R ]h*W~8LsIb7eiWravA;M0{-Q tpdsF˅Kd4ڨl7;Ja1*0B8%lTMBBX6ʍ^yw9mjnbt~,Y(2 &\ >Z M'A%py+cq $S "QXͲjS~춅RG`qBS,)Y# $z@+x02JILV;oMt9yqJ=/?hL9m?v7Uq :Ca[b90*3k0WO qǍj}+) c9nlLYGH(򹖜N)bf!Y=ȕs5[$Y[e)\K\ܣu=h[gXCﮕ2CA1F: l@@~rVk/;@H~9X1VFșA-T'f|wHO/ Y>_^6\ ߇ ga >./^s^$s^Xga7"oZMs7'wݖT0m@Y9Q 1Z DR=R\A*`~7p.Uis-c1KQ\x ,3CJҹV)?2s`ƅOe`k.PHb?33`OLj9+S 4g5?jN, h/)t?uW^5l(o z| 2 I\X Q+VCzUKWޘIux|ԇ<O9?j3j.5ِʹ20a L ^@J:`>+-r`?9:-ы _NBxpZ=z 6?[6? OE "LAX$Pqw) ᚕVw~.XnA=Qݬx2J{ d?}xI_E`{)48՘݀>?%u0<*?֞zW*k>TWdհxE/|11 XS>j SKB1oHzݧN| Wf\($ 4vdXѩ(j8PPf@ xZp ; T`.AG<'[ݎKR)m-hŻUU8a᭦P~|-E۸$ozƲVl3"Vy'oσc;62-=|= %wζT<0՜gl|KA7g6鹯^=]׏j).{_MOU`48J,me*׆ &9%,Jkomn%Scv,ood>S(]RFЮ$Xg68+]u쵲2a@pa^!X&HSE57V"FMsFV;|~NyWUTWVYڃ L* ȷIkߦD.g۔-woP=6i|؊SG Dг쮘#WeWVaWRb`W֝zRJAy+1MnR wyϡI_0x0K ;)ƿ!E6G=&0{eRuɱR$Yd߭tt"XaӉ\M]gӉ6հi R=bW@0floU"W}aWZ 2+dW q"I )V{îloDĻήWȮ8RxLD.®B:JT25+FdU"X2(0F J2*Q)Wɮsn2M;)Ϋl`,fAQr:y褩{\$9" 13iT(XW%N1t:3/K6bWO?2&v]y!?ӹ,hug\rH,8:-ݏDNҵh*jcDj`9T{E%6<0d"XI1KJ[#[-uK[[:妇rOSUnyv PVfBQ#JUp8 ^E8Hy猋;Ωv^8 N&CP8Z NFc bRA+{HnuRZz"q)G+e&1$#4p›wG SZWLj)EcdcIͧ:9; }gv\tS`|'ӷ ˜}מPwpNR7ٻfWa&? k-RΌ ձ"gv*RBq;JG!+PR|*"a_JA)،FvoGc`ǡ^(*:mqk( KdzJq븋*t"Da:? l}."3N>l` &Ruˋ~o/0Q8x%HK~H[*f w?޵@mu ]SV隷Հүng6:bQ!qeϿ\.)LjFI+| z& ]9ܵ/X"hGA56s~6#RϓxZՑXl )$B8]2mGLshP*L@l:HOmlY^^wY@qyCIȌ"sPX1P{`!ْ,fYVwۑs"V'r=ur/DZ?*6J!۠LnL )>TDۨ;]h6/'vp(V[OQ"hXB5`cZhBz*N CJ, 68pOjRz\Tknb-Y=j͐v>/)Qx4IA>u|^XO|zFx@Gτ@?A+'ib"w&&jΧ*%:$M#%J+s8IQnAgjAw"'Jќdsk,Qp2}6Sip58dR$04ٻ6r*\j,7U8I^Ns-tI.IkW^ O9)?2 I51bz0߇nL*dkX4UpZt9b-V⌱I8&js\=DsRp'7ydNGI8~uKm'AR:{knzǹ8F6|j@\I Aqq˩ R0Ib nO*/t{-9bѩDI*#X*kGgbL@%3RAɶ{asZlRQG5r{O݄_'y-|/} K*[b%"a!f,066_gu?3Wetu׏N::ݳ3VWozU5R>N A)Iu[SJ;3gL/t^j SV*ϼ=7]'躓lyŔ  8$/V27VY)eqQG ,6 Vj(!1$@C}gbr54A"%콰;QxJƜlRstr3CL%BM%oH)*rBk`Gbqc%7)X_Au^^B$$ i@ڈ,xld"E+:NaP )h#I_"ռ@T_+9-(1rD A.sZy_ 2(k=M ֋ttLD)J䬈2om 3, | />2sۺܽ4,r ?Dy }hh-P;c2GvF׊Pr89#oHǚUꭖ̘IkSN"d?y+H^9T:>:YTAKUo%|Yicl?AəPlgU٘xUp_ø7kD e'KP.w~ξ TW.0rw&GݡVsR(O94 ep^ DBHШEIQI9tP.eL! 
[gzip-compressed binary data of kubelet.log.gz omitted: compressed stream, not recoverable as plain text]
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000003500123715136002771017700 0ustar rootroot
Jan 27 00:05:53 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 00:05:53 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to
system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 
00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc 
restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 
crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 
crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:53 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 00:05:54 crc 
restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 00:05:54 crc restorecon[4678]: [repetitive output condensed: every file under /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/ (volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 db files, utilities/copy-content, etc-hosts, containers/extract-utilities, containers/extract-content, containers/registry-server) and under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/ (pogreb.v1 cache files, digest, and catalog/<operator>/catalog.json entries from 3scale-community-operator through the ack-* controllers, aerospike-kubernetes-operator, argocd-operator, cert-manager, cockroachdb, datadog-operator, dynatrace-operator, eclipse-che, external-secrets-operator, falcon-operator, flux, grafana-operator, hazelcast-platform-operator, and hedvig-operator) was reported "not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13"]
Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc 
restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 00:05:54 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
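[Editor's note, not part of the captured log] The restorecon pass above leaves every listed path under /var/lib/kubelet at its existing customized SELinux context ("not reset as customized by admin ...") and only relabels /var/usrlocal/bin/kubenswrapper to kubelet_exec_t. A minimal Python sketch for tallying those skipped entries per target context is below; it assumes this log has been saved locally as kubelet.log (hypothetical path) and that the message format matches the lines above.

#!/usr/bin/env python3
# Sketch: count restorecon "not reset as customized by admin" entries per SELinux context.
# Assumes the excerpt above was saved as "kubelet.log"; several entries may share one line.
import re
from collections import Counter

PATTERN = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")

contexts = Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for _path, context in PATTERN.findall(line):
            contexts[context] += 1

for context, count in contexts.most_common():
    print(f"{count:6d}  {context}")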
Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 00:05:54 crc kubenswrapper[4786]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.897660 4786 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910199 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910245 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910256 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910265 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910273 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910282 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910290 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910299 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910307 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910319 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
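[Editor's note, not part of the captured log] The kubenswrapper entries above and below this point emit many "unrecognized feature gate" warnings at startup; these appear to be cluster-level (OpenShift) gate names passed through to the kubelet, which warns on any name it does not register and otherwise ignores it. A small sketch to extract the unique gate names from a saved copy of this log (kubelet.log is an assumed path):

#!/usr/bin/env python3
# Sketch: list the distinct feature gate names the kubelet reported as unrecognized.
import re

GATE = re.compile(r"unrecognized feature gate: (\w+)")
with open("kubelet.log", encoding="utf-8", errors="replace") as log:
    gates = sorted({name for line in log for name in GATE.findall(line)})
print("\n".join(gates))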
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910330 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910339 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910348 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910358 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910366 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910375 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910383 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910391 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910400 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910408 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910416 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910424 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910432 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910441 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910448 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910457 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910467 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910527 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910536 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910546 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910589 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910597 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910609 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910619 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910628 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910637 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910646 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910654 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910664 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910676 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910684 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910693 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910704 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910713 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910723 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910731 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910740 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910747 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910755 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910763 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910771 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910779 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910786 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910794 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910803 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910810 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910818 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910825 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910833 4786 
feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910841 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910850 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910858 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910867 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910874 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910883 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910892 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910899 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910907 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910914 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910922 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.910930 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911089 4786 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911108 4786 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911129 4786 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911141 4786 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911153 4786 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911162 4786 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911174 4786 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911185 4786 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911194 4786 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911204 4786 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911214 4786 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911236 4786 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911246 4786 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911255 4786 flags.go:64] FLAG: --cgroup-root="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911265 
4786 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911276 4786 flags.go:64] FLAG: --client-ca-file=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911284 4786 flags.go:64] FLAG: --cloud-config=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911293 4786 flags.go:64] FLAG: --cloud-provider=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911302 4786 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911319 4786 flags.go:64] FLAG: --cluster-domain=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911328 4786 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911337 4786 flags.go:64] FLAG: --config-dir=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911346 4786 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911356 4786 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911376 4786 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911386 4786 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911395 4786 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911405 4786 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911414 4786 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911423 4786 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911433 4786 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911442 4786 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911452 4786 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911463 4786 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911472 4786 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911482 4786 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911491 4786 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911500 4786 flags.go:64] FLAG: --enable-server="true"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911509 4786 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911527 4786 flags.go:64] FLAG: --event-burst="100"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911536 4786 flags.go:64] FLAG: --event-qps="50"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911546 4786 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911555 4786 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911586 4786 flags.go:64] FLAG: --eviction-hard=""
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911598 4786 flags.go:64] FLAG:
--eviction-max-pod-grace-period="0" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911608 4786 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911620 4786 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911654 4786 flags.go:64] FLAG: --eviction-soft="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911679 4786 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911691 4786 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911702 4786 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911713 4786 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911724 4786 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911735 4786 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911747 4786 flags.go:64] FLAG: --feature-gates="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911762 4786 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911771 4786 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911782 4786 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911791 4786 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911800 4786 flags.go:64] FLAG: --healthz-port="10248" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911810 4786 flags.go:64] FLAG: --help="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911819 4786 flags.go:64] FLAG: --hostname-override="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911827 4786 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911837 4786 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911847 4786 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911881 4786 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911891 4786 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911902 4786 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911911 4786 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911921 4786 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911929 4786 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911938 4786 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911948 4786 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911957 4786 flags.go:64] FLAG: --kube-reserved="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911968 4786 flags.go:64] FLAG: 
--kube-reserved-cgroup="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911978 4786 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911988 4786 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.911997 4786 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912006 4786 flags.go:64] FLAG: --lock-file="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912015 4786 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912025 4786 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912034 4786 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912047 4786 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912075 4786 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912084 4786 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912093 4786 flags.go:64] FLAG: --logging-format="text" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912102 4786 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912112 4786 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912121 4786 flags.go:64] FLAG: --manifest-url="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912130 4786 flags.go:64] FLAG: --manifest-url-header="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912142 4786 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912151 4786 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912162 4786 flags.go:64] FLAG: --max-pods="110" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912171 4786 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912181 4786 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912189 4786 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912198 4786 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912207 4786 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912216 4786 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912226 4786 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912246 4786 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912255 4786 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912264 4786 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912273 4786 flags.go:64] FLAG: --pod-cidr="" Jan 27 00:05:54 crc 
kubenswrapper[4786]: I0127 00:05:54.912283 4786 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912298 4786 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912307 4786 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912317 4786 flags.go:64] FLAG: --pods-per-core="0" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912327 4786 flags.go:64] FLAG: --port="10250" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912361 4786 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912371 4786 flags.go:64] FLAG: --provider-id="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912380 4786 flags.go:64] FLAG: --qos-reserved="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912390 4786 flags.go:64] FLAG: --read-only-port="10255" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912399 4786 flags.go:64] FLAG: --register-node="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912408 4786 flags.go:64] FLAG: --register-schedulable="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912417 4786 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912432 4786 flags.go:64] FLAG: --registry-burst="10" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912441 4786 flags.go:64] FLAG: --registry-qps="5" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912450 4786 flags.go:64] FLAG: --reserved-cpus="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912473 4786 flags.go:64] FLAG: --reserved-memory="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912485 4786 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912495 4786 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912504 4786 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912513 4786 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912522 4786 flags.go:64] FLAG: --runonce="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912532 4786 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912541 4786 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912551 4786 flags.go:64] FLAG: --seccomp-default="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912560 4786 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912615 4786 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912635 4786 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912647 4786 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912659 4786 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912671 4786 flags.go:64] FLAG: 
--storage-driver-secure="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912682 4786 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912691 4786 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912700 4786 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912710 4786 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912719 4786 flags.go:64] FLAG: --system-cgroups="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912728 4786 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912744 4786 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912753 4786 flags.go:64] FLAG: --tls-cert-file="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912762 4786 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912783 4786 flags.go:64] FLAG: --tls-min-version="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912792 4786 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912801 4786 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912810 4786 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912819 4786 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912828 4786 flags.go:64] FLAG: --v="2" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912840 4786 flags.go:64] FLAG: --version="false" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912852 4786 flags.go:64] FLAG: --vmodule="" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912863 4786 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.912873 4786 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913155 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913168 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913190 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913199 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913207 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913215 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913222 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913231 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913238 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913246 4786 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913254 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913261 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913270 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913279 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913286 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913294 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913302 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913309 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913318 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913325 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913333 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913343 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913354 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913363 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913373 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913383 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913392 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913400 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913409 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913416 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913425 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913433 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913441 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913449 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913456 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913464 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913472 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913479 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913499 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913510 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913519 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913527 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913535 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913542 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913551 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913558 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913599 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913614 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913626 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913638 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913648 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913659 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913670 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913680 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913688 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913695 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913703 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913711 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913719 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913727 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913735 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913742 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913750 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913758 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913766 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913774 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913782 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913789 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913797 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913805 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.913813 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.913826 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.923850 4786 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.923891 4786 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923974 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923982 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923987 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923991 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923995 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.923999 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924004 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924011 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924016 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924019 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924023 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924027 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924031 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924034 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924038 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924042 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924046 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924049 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924054 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924058 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924061 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924064 4786 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924068 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924071 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924075 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924079 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924082 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924085 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924089 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924094 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924099 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924104 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924108 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924113 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924118 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924123 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924128 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924133 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924137 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924140 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924144 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924147 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924151 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924154 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924158 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924161 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924165 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924170 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924175 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924179 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924197 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924201 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924206 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924210 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924214 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924218 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924222 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924226 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924229 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924233 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924236 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924240 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924244 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924251 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924254 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924258 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924262 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924266 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924272 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924278 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924283 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.924290 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924468 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924478 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924484 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924490 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924495 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924500 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924504 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924509 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924513 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924517 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924522 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924526 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924530 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924534 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924538 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924541 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924545 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924548 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924552 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 00:05:54 crc kubenswrapper[4786]: 
W0127 00:05:54.924556 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924561 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924578 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924582 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924587 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924592 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924597 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924601 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924605 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924609 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924613 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924617 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924621 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924625 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924629 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924633 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924636 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924640 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924644 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924647 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924651 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924654 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924658 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924662 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924665 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924669 4786 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924673 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924676 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924680 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924684 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924687 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924691 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924695 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924698 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924703 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924708 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924713 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924717 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924721 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924725 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924729 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924734 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924738 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924741 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924745 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924749 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924753 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924757 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924761 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924764 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 00:05:54 crc kubenswrapper[4786]: W0127 00:05:54.924768 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 00:05:54 
crc kubenswrapper[4786]: W0127 00:05:54.924771 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.924778 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.924998 4786 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.930802 4786 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.930890 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.932341 4786 server.go:997] "Starting client certificate rotation"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.932387 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.933736 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 14:50:42.244917023 +0000 UTC
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.933897 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.956961 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 00:05:54 crc kubenswrapper[4786]: E0127 00:05:54.959277 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.961448 4786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 00:05:54 crc kubenswrapper[4786]: I0127 00:05:54.980309 4786 log.go:25] "Validated CRI v1 runtime API"
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.014603 4786 log.go:25] "Validated CRI v1 image API"
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.016939 4786 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.021480 4786 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-00-01-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.021513 4786 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.048635 4786 manager.go:217] Machine: {Timestamp:2026-01-27 00:05:55.045465031 +0000 UTC m=+0.529152144 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0f9e3400-2828-40d4-9904-504379bf40a5 BootID:c3970d11-79bc-4b17-85f1-58a9025c1bc5 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e8:d1:35 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e8:d1:35 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1c:74:a9 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6a:4a:a2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:42:33:c4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f4:1c:ff Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:d1:4c:73:2b:70 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:4e:77:99:38:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data 
Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.049613 4786 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
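The manager.go:217 entry above is cAdvisor's one-line dump of the machine snapshot (cores, sockets, memory, filesystems, NICs, cache topology). As a rough illustration only, the short Go sketch below pulls a couple of numeric fields back out of such a line; machineLine is a hand-trimmed copy of that entry and intField is a hypothetical helper, not kubelet or cAdvisor code.

// machine_peek.go — illustrative only; extracts numeric fields from the
// cAdvisor "Machine: {...}" log entry shown above.
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Hand-trimmed copy of the manager.go:217 entry from the log above.
const machineLine = `manager.go:217] Machine: {... NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 ...}`

// intField returns the integer value of "name:<digits>" in line, or -1.
func intField(line, name string) int64 {
	re := regexp.MustCompile(regexp.QuoteMeta(name) + `:(\d+)`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		return -1
	}
	v, _ := strconv.ParseInt(m[1], 10, 64)
	return v
}

func main() {
	cores := intField(machineLine, "NumCores")
	mem := intField(machineLine, "MemoryCapacity")
	// With the values logged above this prints: cores=12 memory=31.3 GiB
	fmt.Printf("cores=%d memory=%.1f GiB\n", cores, float64(mem)/(1<<30))
}

With the MemoryCapacity of 33654128640 bytes recorded above, the sketch reports roughly 31.3 GiB and 12 cores, matching the snapshot.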
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.050213 4786 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.052861 4786 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.053221 4786 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.053278 4786 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.053745 4786 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.053767 4786 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.054410 4786 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.054472 4786 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.054926 4786 state_mem.go:36] "Initialized new in-memory state store" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.055063 4786 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.059518 4786 kubelet.go:418] "Attempting to sync node with API server" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.059551 4786 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
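The container_manager_linux.go:272 entry a few lines above serializes the kubelet's node config as JSON, including SystemReserved and the HardEvictionThresholds it will enforce. As a sketch, assuming the payload is plain JSON exactly as printed, the hypothetical Go program below decodes a hand-trimmed copy of it (nodeConfigJSON, nodeConfig, and threshold are illustrative names, not kubelet types) to read those values back.

// nodeconfig_peek.go — illustrative reader for the nodeConfig JSON logged
// by container_manager_linux.go above; field names mirror that entry.
package main

import (
	"encoding/json"
	"fmt"
)

// Hand-trimmed copy of the nodeConfig payload from the log entry above.
const nodeConfigJSON = `{
  "NodeName": "crc",
  "CgroupDriver": "systemd",
  "SystemReserved": {"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi"},
  "HardEvictionThresholds": [
    {"Signal": "memory.available", "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}}
  ]
}`

type threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   string
		Percentage float64
	}
}

type nodeConfig struct {
	NodeName               string
	CgroupDriver           string
	SystemReserved         map[string]string
	HardEvictionThresholds []threshold
}

func main() {
	var cfg nodeConfig
	if err := json.Unmarshal([]byte(nodeConfigJSON), &cfg); err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", cfg.CgroupDriver)
	fmt.Println("system reserved memory:", cfg.SystemReserved["memory"])
	for _, t := range cfg.HardEvictionThresholds {
		fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, t.Value.Quantity)
	}
}

Run against the trimmed copy, it prints the systemd cgroup driver, the 350Mi reserved memory, and the memory.available LessThan 100Mi hard-eviction threshold recorded in the entry above.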
Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.059600 4786 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.059622 4786 kubelet.go:324] "Adding apiserver pod source" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.059650 4786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.064855 4786 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.066173 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.066513 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.066623 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.066703 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.066725 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.069256 4786 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070627 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070660 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070668 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070683 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070696 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070704 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070712 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070743 4786 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070752 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070760 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070802 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.070822 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.072582 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.073212 4786 server.go:1280] "Started kubelet" Jan 27 00:05:55 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.075869 4786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.075518 4786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.077890 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.078806 4786 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.081014 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.081084 4786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.081736 4786 server.go:460] "Adding debug handlers to kubelet server" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.083857 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:11:31.278855584 +0000 UTC Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.084380 4786 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.084408 4786 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.088265 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.084867 4786 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.089092 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.089189 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.089447 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.089499 4786 factory.go:55] Registering systemd factory Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.089563 4786 factory.go:221] Registration of the systemd container factory successfully Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.090058 4786 factory.go:153] Registering CRI-O factory Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.090090 4786 factory.go:221] Registration of the crio container factory successfully Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.090176 4786 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.090206 4786 factory.go:103] Registering Raw factory Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.091142 4786 manager.go:1196] Started watching for new ooms in manager Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.088650 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6dbb5c02a038 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 00:05:55.073171512 +0000 UTC m=+0.556858565,LastTimestamp:2026-01-27 00:05:55.073171512 +0000 UTC m=+0.556858565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.092234 4786 manager.go:319] Starting recovery of all containers Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101738 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101852 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101879 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 
00:05:55.101907 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101927 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101946 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101964 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.101982 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102004 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102024 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102043 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102062 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102082 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102103 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 
00:05:55.102122 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102153 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102177 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102202 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102220 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102238 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102255 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102273 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102291 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102307 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102326 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102344 
4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102365 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102385 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102457 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102476 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102494 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102512 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102532 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102553 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102665 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102688 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102708 4786 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102727 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102745 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102763 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102781 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102807 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102825 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102847 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102872 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102897 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102924 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102955 4786 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.102983 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103010 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103036 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103064 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103111 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103144 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103174 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103200 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103226 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103250 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103270 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103290 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103309 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103329 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103350 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103370 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103392 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103416 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103444 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103472 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103495 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103515 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103534 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103553 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103604 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103624 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103644 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103663 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103680 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103698 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103717 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103737 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103783 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103812 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103832 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103852 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103871 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103890 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103909 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103927 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103946 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103966 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.103986 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104006 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104027 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104048 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104079 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104097 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104117 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104144 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104172 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104199 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104228 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104274 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104294 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.104318 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.105680 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.105807 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.105845 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.105868 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.105897 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114644 4786 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114752 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114797 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114835 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114874 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114909 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114954 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.114983 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115013 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115043 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115073 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115103 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115139 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115171 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115203 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115232 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115276 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115307 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115334 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115358 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115381 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115404 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115427 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115459 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115481 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115502 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115524 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115547 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115606 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115630 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115655 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115685 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115716 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115746 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115791 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115827 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115856 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115884 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115912 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115943 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.115972 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116015 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116054 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116085 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116114 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116146 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116178 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116209 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116238 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116267 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116295 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116327 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116359 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116388 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116440 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116511 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116555 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116622 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116654 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116723 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116745 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116767 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116786 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116820 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116852 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116882 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116910 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.116976 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117004 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117025 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117046 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117067 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117090 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117111 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117132 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117192 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117222 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117243 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117265 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117286 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117306 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117326 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117347 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117395 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117418 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117439 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117463 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117488 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117515 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117545 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117605 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117661 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117689 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117717 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117746 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117777 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117807 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117847 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117876 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117931 4786 reconstruct.go:97] "Volume reconstruction finished" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.117950 4786 reconciler.go:26] "Reconciler: start to sync state" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.118794 4786 manager.go:324] Recovery completed Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.132886 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.134768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.134821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.134843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.135946 4786 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.135978 4786 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.136011 4786 state_mem.go:36] "Initialized new in-memory state store" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.143290 4786 kubelet_network_linux.go:50] "Initialized 
iptables rules." protocol="IPv4" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.146091 4786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.146153 4786 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.146195 4786 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.146252 4786 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.147086 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.147147 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.153901 4786 policy_none.go:49] "None policy: Start" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.154870 4786 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.154963 4786 state_mem.go:35] "Initializing new in-memory state store" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.188754 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.210119 4786 manager.go:334] "Starting Device Plugin manager" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.211361 4786 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.211420 4786 server.go:79] "Starting device plugin registration server" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.212076 4786 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.212102 4786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.212961 4786 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.213070 4786 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.213080 4786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.220358 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.246690 4786 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.246894 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.248393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.248441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.248454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.248691 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249039 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249112 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.249661 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250068 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250313 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250399 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250523 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.250560 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251370 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251642 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.251715 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252779 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252821 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.252979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.255061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.255104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.255120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.290176 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.312891 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.316507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.316613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.316641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.316878 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.318344 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320203 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320330 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320356 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320486 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320524 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320596 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320627 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320654 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320680 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320703 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320727 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320750 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320809 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.320857 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422622 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422722 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422772 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422811 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: 
I0127 00:05:55.422922 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422959 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422969 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422998 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423002 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.422963 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423030 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423061 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423073 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423155 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423174 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423109 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423084 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423388 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423418 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423422 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423465 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423492 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423623 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.423667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.518668 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.520106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.520206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.520227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.520282 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.521377 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.594707 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.619511 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.627430 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.641888 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-08f8d91fd25cc8f07e19c35c19aabb55141abb0b6bec43d68d9654d07d813132 WatchSource:0}: Error finding container 08f8d91fd25cc8f07e19c35c19aabb55141abb0b6bec43d68d9654d07d813132: Status 404 returned error can't find the container with id 08f8d91fd25cc8f07e19c35c19aabb55141abb0b6bec43d68d9654d07d813132 Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.647181 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.652889 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c024e301f3f78de42c46eb529e877c56fbc8abe85369111927c8bff32da18fa5 WatchSource:0}: Error finding container c024e301f3f78de42c46eb529e877c56fbc8abe85369111927c8bff32da18fa5: Status 404 returned error can't find the container with id c024e301f3f78de42c46eb529e877c56fbc8abe85369111927c8bff32da18fa5 Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.654221 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.660026 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-95c44ef500038d9acd854b77e966ba7ac8af1445732a82038a152065c4f84491 WatchSource:0}: Error finding container 95c44ef500038d9acd854b77e966ba7ac8af1445732a82038a152065c4f84491: Status 404 returned error can't find the container with id 95c44ef500038d9acd854b77e966ba7ac8af1445732a82038a152065c4f84491 Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.669416 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1ef5f429697c895f03fb7e9c40ddbe8e5c8f0403b3df71942296ed8bc1151e9f WatchSource:0}: Error finding container 1ef5f429697c895f03fb7e9c40ddbe8e5c8f0403b3df71942296ed8bc1151e9f: Status 404 returned error can't find the container with id 1ef5f429697c895f03fb7e9c40ddbe8e5c8f0403b3df71942296ed8bc1151e9f Jan 27 00:05:55 crc kubenswrapper[4786]: W0127 00:05:55.676032 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ffc5c95f3a53ccf50c29cac95853de3724e0e1c9444918aa3ec4fccc84d220a3 WatchSource:0}: Error finding container ffc5c95f3a53ccf50c29cac95853de3724e0e1c9444918aa3ec4fccc84d220a3: Status 404 returned error can't find the container with id ffc5c95f3a53ccf50c29cac95853de3724e0e1c9444918aa3ec4fccc84d220a3 Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.691075 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.922357 4786 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.923835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.923861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.923869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:55 crc kubenswrapper[4786]: I0127 00:05:55.923889 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:05:55 crc kubenswrapper[4786]: E0127 00:05:55.924236 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.078863 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.084938 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:44:13.258759068 +0000 UTC Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.150897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ffc5c95f3a53ccf50c29cac95853de3724e0e1c9444918aa3ec4fccc84d220a3"} Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.152199 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1ef5f429697c895f03fb7e9c40ddbe8e5c8f0403b3df71942296ed8bc1151e9f"} Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.153836 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"95c44ef500038d9acd854b77e966ba7ac8af1445732a82038a152065c4f84491"} Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.155240 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c024e301f3f78de42c46eb529e877c56fbc8abe85369111927c8bff32da18fa5"} Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.156181 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08f8d91fd25cc8f07e19c35c19aabb55141abb0b6bec43d68d9654d07d813132"} Jan 27 00:05:56 crc kubenswrapper[4786]: W0127 00:05:56.247053 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.247150 4786 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:56 crc kubenswrapper[4786]: W0127 00:05:56.361561 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.361682 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.510316 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Jan 27 00:05:56 crc kubenswrapper[4786]: W0127 00:05:56.558130 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.558231 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:56 crc kubenswrapper[4786]: W0127 00:05:56.612675 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.612796 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.724519 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.725967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.726014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:56 crc kubenswrapper[4786]: I0127 00:05:56.726026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:56 crc 
kubenswrapper[4786]: I0127 00:05:56.726056 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:05:56 crc kubenswrapper[4786]: E0127 00:05:56.726582 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.080855 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.085144 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:23:03.067429407 +0000 UTC Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.107503 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 00:05:57 crc kubenswrapper[4786]: E0127 00:05:57.108767 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.161624 4786 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4" exitCode=0 Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.161704 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.161791 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.162998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.163034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.163047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.163844 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.163904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.166117 4786 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c" exitCode=0 Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.166217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.166260 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.167190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.167215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.167226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.168872 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.169456 4786 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f" exitCode=0 Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.169514 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.169673 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.170532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.171927 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9c3c0b1e213899a73a2a030eaeb4eed5065cb76d5cd5e191a48c857f447c60f3" exitCode=0 Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.171958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9c3c0b1e213899a73a2a030eaeb4eed5065cb76d5cd5e191a48c857f447c60f3"} Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.172019 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.172754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.172830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:57 crc kubenswrapper[4786]: I0127 00:05:57.172856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: W0127 00:05:58.028236 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:58 crc kubenswrapper[4786]: E0127 00:05:58.028670 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.079407 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:58 crc kubenswrapper[4786]: W0127 00:05:58.080779 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Jan 27 00:05:58 crc kubenswrapper[4786]: E0127 00:05:58.080840 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.090182 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:57:08.712840394 +0000 UTC Jan 27 00:05:58 crc kubenswrapper[4786]: E0127 00:05:58.111656 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.175563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074"} Jan 27 00:05:58 crc 
kubenswrapper[4786]: I0127 00:05:58.175639 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.175738 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.176839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.176866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.176876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.179879 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.179907 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.179920 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.179931 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.179945 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.180042 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.180861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.180886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.180897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.182626 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.182706 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.183425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.183449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.183459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.185598 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e43a919593dac7a793cf7aabf9c6586842d9621129b6a17b8a98a56e61b72f48" exitCode=0 Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.185717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e43a919593dac7a793cf7aabf9c6586842d9621129b6a17b8a98a56e61b72f48"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.185727 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.186950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.186980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.186991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.192144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.192196 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.192210 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5"} Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.192305 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.193754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.193786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.193797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: E0127 00:05:58.293254 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6dbb5c02a038 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 00:05:55.073171512 +0000 UTC m=+0.556858565,LastTimestamp:2026-01-27 00:05:55.073171512 +0000 UTC m=+0.556858565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.328804 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.329998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.330030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.330041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.330062 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:05:58 crc kubenswrapper[4786]: E0127 00:05:58.330730 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.483104 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:05:58 crc kubenswrapper[4786]: I0127 00:05:58.835534 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.090329 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:54:01.57096911 +0000 UTC Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196657 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1740c0b205f591af52b4983e5902a52e3b9ae62878c2f198e8a4190e96347e5a" exitCode=0 Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196744 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1740c0b205f591af52b4983e5902a52e3b9ae62878c2f198e8a4190e96347e5a"} Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196811 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196851 4786 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196905 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196918 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196959 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.196970 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.197078 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.201824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202423 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.202510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.673285 4786 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:05:59 crc kubenswrapper[4786]: I0127 00:05:59.952313 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.090681 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:12:54.127654541 +0000 UTC Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.204161 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6fe2cc999d105b76aa0cbfcbfa461d3436afc957b3a5c7f154932428ce0404d7"} Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.204252 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.204444 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.204255 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7a1b799985cf2ce342fb6963e06c9223736c131379415b2082bb0e8e86d00a77"} Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.205079 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f96cb72b6836a1238b6e49b71238662af99342805c43050a01a342abb6ab25d"} Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.205660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.205694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.205706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.206727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.206754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:00 crc kubenswrapper[4786]: I0127 00:06:00.206764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.090841 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:52:46.722982773 +0000 UTC Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.214005 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5afa36654001a9c461b8ac98fce3ad6c3eb16c6bf3f8154c8bedfb0702f6646d"} Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.214082 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5afa0fb1d71f298389fc160f89f4c8b92090a0fd54f994513884a04c5e09f65"} Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.214094 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.214186 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.214275 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215388 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.215996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.216008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.437516 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.531447 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.532883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.532935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.532959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:01 crc kubenswrapper[4786]: I0127 00:06:01.532986 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.091848 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:55:46.747072801 +0000 UTC Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.093079 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.093289 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.095081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.095146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.095163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.216474 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.217826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.217888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.217909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.953069 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 00:06:02 crc kubenswrapper[4786]: I0127 00:06:02.953615 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.092115 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:16:56.761877686 +0000 UTC Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.963095 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.963338 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.965130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.965179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:03 crc kubenswrapper[4786]: I0127 00:06:03.965193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.092363 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:37:35.034502872 
+0000 UTC Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.970758 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.970958 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.972589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.972669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.972689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:04 crc kubenswrapper[4786]: I0127 00:06:04.980369 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:05 crc kubenswrapper[4786]: I0127 00:06:05.092997 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:28:08.031422236 +0000 UTC Jan 27 00:06:05 crc kubenswrapper[4786]: E0127 00:06:05.220710 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 00:06:05 crc kubenswrapper[4786]: I0127 00:06:05.223656 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:05 crc kubenswrapper[4786]: I0127 00:06:05.225203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:05 crc kubenswrapper[4786]: I0127 00:06:05.225371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:05 crc kubenswrapper[4786]: I0127 00:06:05.225493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:06 crc kubenswrapper[4786]: I0127 00:06:06.093530 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:41:05.709726582 +0000 UTC Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.095537 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:59:19.900955302 +0000 UTC Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.488534 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.488782 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.490152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.490203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.490221 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:06:07 crc kubenswrapper[4786]: I0127 00:06:07.493669 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.095991 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:42:28.142190799 +0000 UTC Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.230359 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.231204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.231268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.231290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:08 crc kubenswrapper[4786]: W0127 00:06:08.862643 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 00:06:08 crc kubenswrapper[4786]: I0127 00:06:08.862775 4786 trace.go:236] Trace[720263407]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 00:05:58.860) (total time: 10001ms): Jan 27 00:06:08 crc kubenswrapper[4786]: Trace[720263407]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:08.862) Jan 27 00:06:08 crc kubenswrapper[4786]: Trace[720263407]: [10.00198461s] [10.00198461s] END Jan 27 00:06:08 crc kubenswrapper[4786]: E0127 00:06:08.862813 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 00:06:09 crc kubenswrapper[4786]: W0127 00:06:09.002360 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.002452 4786 trace.go:236] Trace[1598876193]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 00:05:59.001) (total time: 10001ms): Jan 27 00:06:09 crc kubenswrapper[4786]: Trace[1598876193]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:09.002) Jan 27 00:06:09 crc kubenswrapper[4786]: Trace[1598876193]: [10.00123087s] [10.00123087s] END Jan 27 00:06:09 crc kubenswrapper[4786]: E0127 00:06:09.002477 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.079689 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.096851 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:42:08.262408422 +0000 UTC Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.673997 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.674077 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.856731 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 00:06:09 crc kubenswrapper[4786]: I0127 00:06:09.856815 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.097894 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:15:44.107275624 +0000 UTC Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.223589 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.223760 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.224518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.224552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.224579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.281398 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.281586 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:10 crc 
kubenswrapper[4786]: I0127 00:06:10.282547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.282609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.282625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:10 crc kubenswrapper[4786]: I0127 00:06:10.299556 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 00:06:11 crc kubenswrapper[4786]: I0127 00:06:11.097993 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:51:26.040736088 +0000 UTC Jan 27 00:06:11 crc kubenswrapper[4786]: I0127 00:06:11.235765 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:11 crc kubenswrapper[4786]: I0127 00:06:11.236537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:11 crc kubenswrapper[4786]: I0127 00:06:11.236597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:11 crc kubenswrapper[4786]: I0127 00:06:11.236614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:12 crc kubenswrapper[4786]: I0127 00:06:12.099111 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:42:36.408749467 +0000 UTC Jan 27 00:06:12 crc kubenswrapper[4786]: I0127 00:06:12.953942 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 00:06:12 crc kubenswrapper[4786]: I0127 00:06:12.954022 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 00:06:13 crc kubenswrapper[4786]: I0127 00:06:13.099777 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:07:02.933234032 +0000 UTC Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.101306 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:09:24.024176279 +0000 UTC Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.313547 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.678350 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.678616 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.679832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.679866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.679875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.682855 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.840735 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 00:06:14 crc kubenswrapper[4786]: E0127 00:06:14.862798 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 27 00:06:14 crc kubenswrapper[4786]: E0127 00:06:14.866115 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.870747 4786 trace.go:236] Trace[349313315]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 00:06:01.573) (total time: 13296ms):
Jan 27 00:06:14 crc kubenswrapper[4786]: Trace[349313315]: ---"Objects listed" error: 13296ms (00:06:14.870)
Jan 27 00:06:14 crc kubenswrapper[4786]: Trace[349313315]: [13.296795576s] [13.296795576s] END
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.870789 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.871403 4786 trace.go:236] Trace[331836017]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 00:06:04.332) (total time: 10538ms):
Jan 27 00:06:14 crc kubenswrapper[4786]: Trace[331836017]: ---"Objects listed" error: 10538ms (00:06:14.870)
Jan 27 00:06:14 crc kubenswrapper[4786]: Trace[331836017]: [10.538833343s] [10.538833343s] END
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.870765 4786 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.871432 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.882151 4786 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.902340 4786 csr.go:261] certificate signing request csr-r5jtc is approved, waiting to be issued
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.909874 4786 csr.go:257] certificate signing request csr-r5jtc is issued
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.910616 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35128->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.910662 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35128->192.168.126.11:17697: read: connection reset by peer"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.910865 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47928->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.911035 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47928->192.168.126.11:17697: read: connection reset by peer"
Jan 27 00:06:14 crc kubenswrapper[4786]: I0127 00:06:14.933155 4786 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 27 00:06:14 crc kubenswrapper[4786]: W0127 00:06:14.933316 4786 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 27 00:06:14 crc kubenswrapper[4786]: W0127 00:06:14.933352 4786 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 27 00:06:14 crc kubenswrapper[4786]: W0127 00:06:14.933356 4786 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 27 00:06:14 crc kubenswrapper[4786]: E0127 00:06:14.933319 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.188e6dbb5faf8417\": read tcp 38.102.83.169:55692->38.102.83.169:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.188e6dbb5faf8417 default 26177 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 00:05:55 +0000 UTC,LastTimestamp:2026-01-27 00:05:55.25031127 +0000 UTC m=+0.733998313,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 00:06:14 crc kubenswrapper[4786]: W0127 00:06:14.933417 4786 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.071496 4786 apiserver.go:52] "Watching apiserver"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.080600 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.080926 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.081421 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.081515 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.081431 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.081977 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.082113 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.082176 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.082362 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.082463 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.082673 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.084542 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.085545 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.085742 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.085759 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.085914 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.087289 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.087467 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.087597 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.087816 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.090206 4786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.101967 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:09:42.07167767 +0000 UTC Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.128044 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.142981 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.157758 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172252 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172523 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172591 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172618 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172638 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172659 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172680 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172700 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172721 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172763 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172787 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172832 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172895 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.172904 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173058 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173093 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173120 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173144 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173200 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173228 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173261 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173293 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173320 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173347 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173375 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173399 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173432 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173438 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173487 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173538 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173582 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173611 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173637 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173658 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173663 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173714 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173742 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173770 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173794 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173818 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173846 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173873 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173891 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173895 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173949 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.173979 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174006 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174036 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174064 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174072 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174116 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174149 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174270 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174331 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174420 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174461 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174717 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.174742 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175124 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175193 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175251 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175338 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175412 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175442 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175479 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175502 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175650 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.175738 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176065 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176141 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176268 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176447 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176454 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176679 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.176775 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:15.676754462 +0000 UTC m=+21.160441505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176892 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176786 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176826 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176846 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.176935 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177136 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177195 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177224 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177245 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177271 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177294 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177314 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177394 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177448 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177555 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177766 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177771 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.178045 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.178191 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.178286 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179513 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179621 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179743 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179777 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179813 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179850 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179885 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.179913 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.180384 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.180633 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.180847 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181147 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181171 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181214 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181227 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181208 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181243 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181380 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177903 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181402 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181414 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181500 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181547 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181598 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181624 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181660 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181686 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181706 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181746 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181779 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181837 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181863 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181890 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181920 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181947 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181977 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182007 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182031 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182062 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182094 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182124 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.177948 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184894 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.185086 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181826 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181879 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.181925 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182039 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182160 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182391 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182651 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182787 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182924 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.182905 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.183383 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.183502 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.183916 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.183918 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.185220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.183939 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184045 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184168 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184185 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.185779 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184271 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.184866 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.186149 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.187354 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.187387 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.187773 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.187788 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.187391 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.188274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.188416 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.188918 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.189262 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.190115 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.191294 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.191509 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.191780 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.191970 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192263 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192404 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192666 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.190357 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192907 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.193674 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.193834 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.193987 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192341 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192348 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.192406 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.191737 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.193924 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.193945 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194376 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194409 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194558 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194698 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194721 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194787 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194830 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194853 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194874 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194894 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194915 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194934 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194955 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194977 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.194998 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195020 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195075 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195096 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195115 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195137 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195159 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195177 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195196 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195216 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195235 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195244 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195255 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195268 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195279 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195319 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195332 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195398 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195447 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195469 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195491 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195512 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195530 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195531 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195549 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195592 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195614 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195636 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195707 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195724 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195734 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195778 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195837 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195847 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195865 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195892 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195920 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.195946 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196021 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196048 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196080 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196112 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196138 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196164 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196190 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196217 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196245 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196325 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196357 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196384 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196410 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196437 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196464 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196489 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196525 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196554 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196622 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196650 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196677 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196705 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196732 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196756 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196784 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196820 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196846 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196874 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196898 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196924 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196948 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196972 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196996 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197028 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197054 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197083 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197108 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197135 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197162 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197453 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197487 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197516 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197544 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197591 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197622 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197648 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197676 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197728 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197759 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197786 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198052 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198082 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198116 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198142 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198194 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198219 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198244 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198346 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198463 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198482 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198496 4786 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198510 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198537 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198560 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198594 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198610 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198624 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198638 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198651 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198664 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198678 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198712 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198726 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198755 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198769 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198783 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198796 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198809 4786 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198822 4786 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198835 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198848 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198862 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198876 4786 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198930 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198948 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198963 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198977 4786 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198991 4786 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199005 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on 
node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199018 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199046 4786 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199061 4786 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199076 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199091 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199106 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199121 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199135 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199148 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199161 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199173 4786 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199187 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199200 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199215 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199229 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199246 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199259 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199272 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199285 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199297 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199309 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199323 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199335 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199349 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199363 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199377 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199391 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199404 4786 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199417 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199430 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199444 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199457 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199470 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199484 4786 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199497 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199509 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199521 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199534 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199548 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 
crc kubenswrapper[4786]: I0127 00:06:15.199561 4786 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199593 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199606 4786 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199623 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199636 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199650 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199664 4786 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199680 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199697 4786 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199714 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199729 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199742 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199757 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 
00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199771 4786 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199784 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199796 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199811 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199825 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199838 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199858 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199872 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199886 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199898 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199911 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199925 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199937 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199950 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199963 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199976 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199988 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200001 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200014 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200028 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200040 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196079 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.202122 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.202138 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196204 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196355 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196685 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196823 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.196986 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197031 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197135 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197248 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204141 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197338 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197373 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197378 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197468 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197659 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197540 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.202140 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197766 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198722 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.198988 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199266 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199377 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199631 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.199796 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200353 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200622 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200761 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200769 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200693 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200800 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.200835 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201105 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201160 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201209 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201354 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201527 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201601 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201608 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.203675 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.201952 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204027 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204805 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204891 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204905 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204952 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.204994 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.197726 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.205159 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.205212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.202257 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.205375 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.205219 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.205491 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.203017 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206167 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206254 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206290 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206481 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206399 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206653 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.206586 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.206741 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.206808 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:15.706788874 +0000 UTC m=+21.190475917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.207152 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.207235 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.207306 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.207365 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.207451 4786 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.207809 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.207893 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:15.707870275 +0000 UTC m=+21.191557558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.208378 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.208383 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.208493 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.203059 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.210533 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.210800 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.221845 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.223929 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.223944 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.224034 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.224145 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.224200 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.224253 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.224284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.226258 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.227716 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229253 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229275 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229289 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229340 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:15.72932397 +0000 UTC m=+21.213011013 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229429 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229440 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229447 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.229472 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:15.729465064 +0000 UTC m=+21.213152317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.233080 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.233330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.235827 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.236140 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.236215 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.236852 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.236967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.237019 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.236648 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.239586 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.239709 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.240657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.240562 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.246492 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.259546 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.269752 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.272265 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.274084 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f" exitCode=255 Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.274126 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f"} Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.274549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.286732 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.289477 4786 scope.go:117] "RemoveContainer" containerID="d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.290208 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.297622 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300333 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300383 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300428 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300440 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300450 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300460 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300471 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300480 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300489 
4786 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300499 4786 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300508 4786 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300518 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300526 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300525 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300534 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300605 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300632 4786 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300667 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300689 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300702 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc 
kubenswrapper[4786]: I0127 00:06:15.300715 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300727 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300740 4786 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300755 4786 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300768 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300780 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300793 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300805 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300816 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300827 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300839 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300851 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300874 4786 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300887 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300899 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300911 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300923 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300935 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300947 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300959 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300975 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300986 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.300999 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301013 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301025 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301038 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301050 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301063 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301075 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301087 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301100 4786 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301113 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301125 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301137 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301149 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301160 4786 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301172 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301183 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301196 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301207 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301218 4786 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301230 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301241 4786 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301254 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301267 4786 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301278 4786 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301290 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301302 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301314 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301326 4786 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301339 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301352 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301363 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301375 4786 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301386 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301398 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301409 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301421 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301433 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301444 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301455 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301466 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301477 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301489 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301499 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node 
\"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301511 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301522 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301534 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301547 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301559 4786 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301590 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301602 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301613 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.301624 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.309046 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.319964 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.332040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.346117 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.355785 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.372280 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.383531 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.396607 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.406145 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.409553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-ap
iserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.422500 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.427092 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 00:06:15 crc kubenswrapper[4786]: W0127 00:06:15.429123 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b693ad03dd095008fe691eee3a648c1bcd6934c3144cb5960e1d73b6816c6cbf WatchSource:0}: Error finding container b693ad03dd095008fe691eee3a648c1bcd6934c3144cb5960e1d73b6816c6cbf: Status 404 returned error can't find the container with id b693ad03dd095008fe691eee3a648c1bcd6934c3144cb5960e1d73b6816c6cbf Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.704717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.704912 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:16.704892723 +0000 UTC m=+22.188579766 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.805986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.806024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.806053 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.806070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806091 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806161 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806183 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:16.806163468 +0000 UTC m=+22.289850511 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806261 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:16.80622547 +0000 UTC m=+22.289912503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806343 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806356 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806366 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806399 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806408 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806417 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806443 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:16.806435466 +0000 UTC m=+22.290122509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: E0127 00:06:15.806482 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:16.806475977 +0000 UTC m=+22.290163020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.911341 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 00:01:14 +0000 UTC, rotation deadline is 2026-10-19 05:58:43.960201745 +0000 UTC Jan 27 00:06:15 crc kubenswrapper[4786]: I0127 00:06:15.911437 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6365h52m28.048768197s for next certificate rotation Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.102772 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:03:50.230891824 +0000 UTC Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.277293 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cbdece3e16a2b42956f766c8f837ffc5a9f8055da68b2952f471705299f55824"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.279664 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.279724 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.279737 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8c97791a3938deab179ffc1f9400e211a2118213160c09543325ff1491f7688f"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.281221 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.281262 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b693ad03dd095008fe691eee3a648c1bcd6934c3144cb5960e1d73b6816c6cbf"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.283224 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.285144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b"} Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.285491 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.313191 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xs757"] Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.313549 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.315072 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.315652 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.315844 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.316084 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.319399 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jpkpx"] Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.319906 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.321734 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.322124 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.322951 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.323536 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.339281 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.358604 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.376274 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.390947 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.410359 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j288p\" (UniqueName: \"kubernetes.io/projected/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-kube-api-access-j288p\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.410419 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34352a54-9248-486d-9020-721918f77f3a-host\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.410446 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwg8z\" (UniqueName: \"kubernetes.io/projected/34352a54-9248-486d-9020-721918f77f3a-kube-api-access-mwg8z\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.410483 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34352a54-9248-486d-9020-721918f77f3a-serviceca\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.410528 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-hosts-file\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.415051 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.485178 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.506080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j288p\" (UniqueName: \"kubernetes.io/projected/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-kube-api-access-j288p\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34352a54-9248-486d-9020-721918f77f3a-host\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511362 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwg8z\" (UniqueName: \"kubernetes.io/projected/34352a54-9248-486d-9020-721918f77f3a-kube-api-access-mwg8z\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34352a54-9248-486d-9020-721918f77f3a-serviceca\") pod \"node-ca-xs757\" (UID: 
\"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511392 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-hosts-file\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511465 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-hosts-file\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.511503 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34352a54-9248-486d-9020-721918f77f3a-host\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.513493 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34352a54-9248-486d-9020-721918f77f3a-serviceca\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.532326 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.532786 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j288p\" (UniqueName: \"kubernetes.io/projected/5cfc6ef0-977d-4767-bb4b-ba841a34acb6-kube-api-access-j288p\") pod \"node-resolver-jpkpx\" (UID: \"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\") " pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.534462 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwg8z\" (UniqueName: \"kubernetes.io/projected/34352a54-9248-486d-9020-721918f77f3a-kube-api-access-mwg8z\") pod \"node-ca-xs757\" (UID: \"34352a54-9248-486d-9020-721918f77f3a\") " pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.550210 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.562930 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.573539 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.632265 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xs757" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.632911 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.645063 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jpkpx" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.656493 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.669311 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.685750 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:16Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.733032 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.733620 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:18.733591404 +0000 UTC m=+24.217278447 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.833997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.834357 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.834389 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834172 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: I0127 00:06:16.834416 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834443 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834464 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834496 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834506 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:16 crc 
kubenswrapper[4786]: E0127 00:06:16.834519 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:18.834500059 +0000 UTC m=+24.318187102 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834512 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834546 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834552 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:18.83453943 +0000 UTC m=+24.318226473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834590 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:18.834581062 +0000 UTC m=+24.318268105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834610 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:16 crc kubenswrapper[4786]: E0127 00:06:16.834709 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:18.834685765 +0000 UTC m=+24.318372798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.103601 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:27:12.915126955 +0000 UTC Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.114132 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-87nzd"] Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.114495 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-phvd5"] Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.114669 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.114738 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.117000 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ntg2b"] Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.117441 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.117534 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.117882 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.117978 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.118332 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.118387 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.118689 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.118884 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.119033 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.120180 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.120327 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.120444 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.120477 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.120892 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fqh9p"] Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.121821 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.123382 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.123915 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.124024 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.124775 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.125062 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.125320 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.127209 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.135261 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7
73257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137552 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137624 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137659 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137681 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137702 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cnibin\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjhj\" (UniqueName: \"kubernetes.io/projected/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-kube-api-access-vwjhj\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " 
pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-multus-certs\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137819 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137836 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjgj\" (UniqueName: \"kubernetes.io/projected/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-kube-api-access-4mjgj\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137863 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-system-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137878 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-k8s-cni-cncf-io\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137892 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-etc-kubernetes\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137908 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137924 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-multus-daemon-config\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137940 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-multus\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137954 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137969 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-bin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137983 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.137999 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-system-cni-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138013 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138048 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-os-release\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138074 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-conf-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138147 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138253 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138274 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138299 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-cni-binary-copy\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138322 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-netns\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138343 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnx4z\" (UniqueName: \"kubernetes.io/projected/8d790bab-fb2b-4745-a195-65359a962f52-kube-api-access-gnx4z\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138365 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138388 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 
00:06:17.138409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138430 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-proxy-tls\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138454 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138475 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-cnibin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138514 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbcd\" (UniqueName: \"kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138541 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138561 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138596 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-rootfs\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " 
pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138616 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-os-release\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138638 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138660 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138681 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-socket-dir-parent\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138709 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138733 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-kubelet\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.138751 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-hostroot\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.147303 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.147461 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.147503 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.147510 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:17 crc kubenswrapper[4786]: E0127 00:06:17.147602 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:17 crc kubenswrapper[4786]: E0127 00:06:17.147690 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:17 crc kubenswrapper[4786]: E0127 00:06:17.147799 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.152007 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.152591 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.153863 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.154459 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.155493 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.155991 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.156639 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.157600 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.158218 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.159107 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.159585 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.160616 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.160727 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.161083 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 00:06:17 crc 
kubenswrapper[4786]: I0127 00:06:17.161604 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.162450 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.163053 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.163923 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.164299 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.164883 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.165843 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.166286 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.167254 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.167671 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.168633 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.169010 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.169562 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.170555 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.171107 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.172033 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.172482 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.173449 4786 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.173548 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.173914 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.175181 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.178923 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.179381 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.181704 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.183402 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.184173 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.185677 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.186708 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.187917 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.189171 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.190703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.190996 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 
00:06:17.192341 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.193016 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.193792 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.195133 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.196253 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.197471 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.198175 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.199431 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.200163 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.200814 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.201792 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.203823 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.226461 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239336 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-system-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239371 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-k8s-cni-cncf-io\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239385 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-etc-kubernetes\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239401 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-multus-daemon-config\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239416 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239436 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-multus\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239452 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-bin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239482 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239489 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-k8s-cni-cncf-io\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239513 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-etc-kubernetes\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239539 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-system-cni-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239500 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-system-cni-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239559 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-system-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239617 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239613 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd\") pod 
\"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239656 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-os-release\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-conf-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239685 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-multus\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-cni-bin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239729 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239736 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239751 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239776 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239796 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc 
kubenswrapper[4786]: I0127 00:06:17.239816 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-netns\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239839 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnx4z\" (UniqueName: \"kubernetes.io/projected/8d790bab-fb2b-4745-a195-65359a962f52-kube-api-access-gnx4z\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239861 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-cni-binary-copy\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239880 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239907 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239929 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239951 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-proxy-tls\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239976 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240017 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-cnibin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbcd\" (UniqueName: \"kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240139 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-os-release\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240154 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-rootfs\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240168 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240182 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240197 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-socket-dir-parent\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240214 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-kubelet\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-hostroot\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240262 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240308 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cnibin\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240336 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240351 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-multus-certs\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240387 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjhj\" (UniqueName: \"kubernetes.io/projected/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-kube-api-access-vwjhj\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240406 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240420 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjgj\" (UniqueName: \"kubernetes.io/projected/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-kube-api-access-4mjgj\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240466 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-multus-daemon-config\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240498 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240527 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240548 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240581 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-hostroot\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240585 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-conf-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240622 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240652 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240768 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-cni-dir\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240799 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-multus-certs\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240891 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.239908 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-os-release\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.240971 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-os-release\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241005 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-rootfs\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241045 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 
00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241066 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241119 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-multus-socket-dir-parent\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241127 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241143 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241188 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-var-lib-kubelet\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241210 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241233 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241302 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241527 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241601 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-host-run-netns\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241719 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241787 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241830 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241858 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-cnibin\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241889 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8d790bab-fb2b-4745-a195-65359a962f52-cnibin\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241911 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.241977 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.242183 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.242459 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8d790bab-fb2b-4745-a195-65359a962f52-cni-binary-copy\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.242623 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.244972 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-proxy-tls\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.246588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.256178 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.263091 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnx4z\" (UniqueName: \"kubernetes.io/projected/8d790bab-fb2b-4745-a195-65359a962f52-kube-api-access-gnx4z\") pod \"multus-phvd5\" (UID: \"8d790bab-fb2b-4745-a195-65359a962f52\") " pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.263367 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbcd\" (UniqueName: 
\"kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd\") pod \"ovnkube-node-fqh9p\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.268384 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjgj\" (UniqueName: \"kubernetes.io/projected/bcd24fc4-5ad4-4080-aa07-55552ab1e5e6-kube-api-access-4mjgj\") pod \"machine-config-daemon-87nzd\" (UID: \"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\") " pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.273106 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.278538 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjhj\" (UniqueName: \"kubernetes.io/projected/4bf39ee3-976b-4ecb-b200-1d4a790b67ff-kube-api-access-vwjhj\") pod \"multus-additional-cni-plugins-ntg2b\" (UID: \"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\") " pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.283409 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.288776 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpkpx" event={"ID":"5cfc6ef0-977d-4767-bb4b-ba841a34acb6","Type":"ContainerStarted","Data":"67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f"} Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.288930 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpkpx" event={"ID":"5cfc6ef0-977d-4767-bb4b-ba841a34acb6","Type":"ContainerStarted","Data":"dc987642e28172031601c8eec124ebc1a409dbc0fd7f94ec80db0e6fc7d10dcd"} Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.290136 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xs757" event={"ID":"34352a54-9248-486d-9020-721918f77f3a","Type":"ContainerStarted","Data":"ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121"} Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.290174 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xs757" event={"ID":"34352a54-9248-486d-9020-721918f77f3a","Type":"ContainerStarted","Data":"b2edd130cba3ff7d8870ac5f98835c6a72ae921921f42122da843afcbe15674e"} Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.296210 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.307946 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.324911 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.359751 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.381175 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.399022 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.412532 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.423661 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.431930 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-phvd5" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.437427 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: W0127 00:06:17.442008 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d790bab_fb2b_4745_a195_65359a962f52.slice/crio-8388dd88bebc0067a0e5f2361bfba481bd506d5e5c8e6de3994b75c2163ad0c9 WatchSource:0}: Error finding container 8388dd88bebc0067a0e5f2361bfba481bd506d5e5c8e6de3994b75c2163ad0c9: Status 404 returned error can't find the container with id 8388dd88bebc0067a0e5f2361bfba481bd506d5e5c8e6de3994b75c2163ad0c9 Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.442449 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.454841 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.456531 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.466196 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.476957 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.487797 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.500602 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.512460 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.522041 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.537709 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.553797 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.568610 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.583169 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: W0127 00:06:17.589993 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629f8cf2_3b6f_404b_814f_1e613f80e63e.slice/crio-53c9b258c899a3126cd3056c657af284d6115e0d7a735e291b783da7daa2f830 WatchSource:0}: Error finding container 53c9b258c899a3126cd3056c657af284d6115e0d7a735e291b783da7daa2f830: Status 404 returned error can't find the container with id 53c9b258c899a3126cd3056c657af284d6115e0d7a735e291b783da7daa2f830 Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.598936 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.614545 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.636227 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.658805 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.672562 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:17 crc kubenswrapper[4786]: I0127 00:06:17.687031 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:17Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.104649 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:21:29.120647561 +0000 UTC Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.295338 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerStarted","Data":"1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.295387 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" 
event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerStarted","Data":"8388dd88bebc0067a0e5f2361bfba481bd506d5e5c8e6de3994b75c2163ad0c9"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.297529 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c" exitCode=0 Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.297617 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.297666 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerStarted","Data":"81db55e1c7c55d2356024d64b667e6a762ad322eb63cbe3fd3bcf166aff6a6bc"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.299818 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.299854 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.299868 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"4ce9fe5883ac40552719b695255c421cab57887023fe3fd385b62729b6d17bce"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.301400 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.303170 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" exitCode=0 Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.303264 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.303329 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"53c9b258c899a3126cd3056c657af284d6115e0d7a735e291b783da7daa2f830"} Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.313179 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.329864 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.347432 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.359075 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.376460 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.388917 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.518552 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.566000 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.587785 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.616258 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.641348 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.655853 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.672714 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.682836 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.696390 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.709640 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.726347 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.744595 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z 
is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.760290 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.776198 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.788464 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.802775 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.815750 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.815979 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:22.815960182 +0000 UTC m=+28.299647225 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.818643 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.830518 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.843171 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.860041 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:18Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.916604 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:18 
crc kubenswrapper[4786]: I0127 00:06:18.916652 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.916675 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:18 crc kubenswrapper[4786]: I0127 00:06:18.916697 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916792 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916808 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916850 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916859 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916872 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:22.916853397 +0000 UTC m=+28.400540500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916903 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:22.916890658 +0000 UTC m=+28.400577691 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916925 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916949 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916956 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:22.91694722 +0000 UTC m=+28.400634373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916959 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.916967 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:18 crc kubenswrapper[4786]: E0127 00:06:18.917013 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:22.916987411 +0000 UTC m=+28.400674454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.105871 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:54:05.445619631 +0000 UTC Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.149612 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:19 crc kubenswrapper[4786]: E0127 00:06:19.149735 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.149803 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:19 crc kubenswrapper[4786]: E0127 00:06:19.149858 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.149900 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:19 crc kubenswrapper[4786]: E0127 00:06:19.149951 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.317374 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce" exitCode=0 Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.317582 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce"} Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.328281 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.328351 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.328368 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.338344 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.352992 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.368898 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.383931 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.394992 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.405426 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.425245 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.443327 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.460403 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z 
is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.474601 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.491493 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.509145 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.531427 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.958709 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.965414 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.971813 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.972848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.986779 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:19 crc kubenswrapper[4786]: I0127 00:06:19.998195 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:19Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.010283 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.024555 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.043048 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.067095 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z 
is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.086168 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.101897 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.106796 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:10:17.758610625 +0000 UTC Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.119020 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.136001 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.154914 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.182105 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.202157 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.223383 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.240236 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.258562 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z 
is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.272184 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.280820 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.291187 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.302429 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.320544 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.332833 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f" exitCode=0 Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.332893 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f"} Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.337077 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.337111 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.337128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 
00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.337660 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.355446 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.371555 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.384154 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.398255 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.416835 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.430043 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.442289 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.457965 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.477468 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.492697 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.509675 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.533586 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.545381 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.556149 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.567521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.583455 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.600372 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:20 crc kubenswrapper[4786]: I0127 00:06:20.612748 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:20Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.107276 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:46:08.397036926 +0000 UTC Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.146587 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.146628 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.146594 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.146717 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.146765 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.146837 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.266817 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.268323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.268352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.268361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.268451 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.276002 4786 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.276340 4786 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.277637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.277686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.277700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.277717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.277732 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.292005 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.296198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.296248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.296257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.296270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.296280 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.308739 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.313803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.313856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.313873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.313934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.313953 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.332579 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.336223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.336254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.336262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.336275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.336284 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.342495 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1" exitCode=0 Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.342604 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1"} Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.352890 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.355461 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.361206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.361253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.361266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.361285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.361297 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.371073 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.372947 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: E0127 00:06:21.373102 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.374875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.374911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.374921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.374936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.374945 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.387390 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.400591 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.410899 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.423186 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.432324 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.442730 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.453613 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.468892 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118
034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.477491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.477524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.477534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.477615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.477650 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.486214 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z 
is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.497179 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.509787 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.522359 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.580295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.580335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.580346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.580362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.580374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.595240 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.682311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.682339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.682350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.682364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.682374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.784343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.784375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.784383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.784398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.784408 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.887290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.887335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.887347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.887364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.887376 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.990658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.990744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.990758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.990784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:21 crc kubenswrapper[4786]: I0127 00:06:21.990800 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:21Z","lastTransitionTime":"2026-01-27T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.092738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.093175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.093278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.093386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.093487 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.108474 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:56:44.565710365 +0000 UTC Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.196451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.196496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.196507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.196525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.196539 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.298527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.298636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.298675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.298713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.298737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.350562 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.353606 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483" exitCode=0 Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.353650 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.372117 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.384507 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.399658 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.403301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.403328 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.403339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.403357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.403370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.417189 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875
300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.430276 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.440407 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.452260 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.466414 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.477646 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.487183 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.498841 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.505214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.505243 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.505254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.505270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.505283 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.508144 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.519529 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.534049 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.607203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.607233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.607244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.607259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.607271 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.709142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.709180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.709194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.709213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.709229 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.811819 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.811866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.811879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.811895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.811907 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.858301 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.858473 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:30.858452253 +0000 UTC m=+36.342139296 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.913846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.913882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.913895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.913927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.913938 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:22Z","lastTransitionTime":"2026-01-27T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.958837 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.958879 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.958901 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:22 crc kubenswrapper[4786]: I0127 00:06:22.958921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959020 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959035 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959045 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959086 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:30.959072639 +0000 UTC m=+36.442759682 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959407 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959455 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:30.95944512 +0000 UTC m=+36.443132163 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959496 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959523 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:30.959514032 +0000 UTC m=+36.443201075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959593 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959606 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959616 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:22 crc kubenswrapper[4786]: E0127 00:06:22.959641 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:30.959632685 +0000 UTC m=+36.443319728 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.017152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.017214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.017237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.017270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.017296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.108622 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:04:13.089319806 +0000 UTC Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.119800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.119835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.119847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.119864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.119875 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.150235 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:23 crc kubenswrapper[4786]: E0127 00:06:23.150386 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.150696 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:23 crc kubenswrapper[4786]: E0127 00:06:23.150782 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.150847 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:23 crc kubenswrapper[4786]: E0127 00:06:23.150912 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.222189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.222219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.222231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.222244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.222253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.324062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.324107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.324119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.324136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.324148 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.361366 4786 generic.go:334] "Generic (PLEG): container finished" podID="4bf39ee3-976b-4ecb-b200-1d4a790b67ff" containerID="de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7" exitCode=0 Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.361438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerDied","Data":"de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.390191 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.403081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.419744 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.429169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.429215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.429231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.429251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.429263 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.431446 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.441114 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.451150 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.465758 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.476689 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.491605 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.507077 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.525993 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.550669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.550719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.550731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.550746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.550755 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.553860 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.573280 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.585548 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.653083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.653115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.653124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.653136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.653144 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.756153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.756211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.756227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.756245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.756257 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.859329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.859382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.859401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.859430 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.859453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.962376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.962452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.962479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.962510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:23 crc kubenswrapper[4786]: I0127 00:06:23.962533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:23Z","lastTransitionTime":"2026-01-27T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.064481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.064520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.064535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.064556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.064624 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.108993 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:04:05.441992128 +0000 UTC
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.167400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.167426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.167436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.167450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.167462 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.270207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.270258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.270275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.270298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.270319 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.357067 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371797 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.371983 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.372257 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.372324 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.377811 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" event={"ID":"4bf39ee3-976b-4ecb-b200-1d4a790b67ff","Type":"ContainerStarted","Data":"4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.387429 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.401171 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.416904 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.418919 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.421919 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.439555 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.456291 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.474885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.474953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.474971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.474998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.475019 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.478005 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.499013 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.515317 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.529511 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.546451 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.569622 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.577958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.577998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.578011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.578029 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.578041 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.584759 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.595838 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.607546 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.624075 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.641886 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.658775 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.675057 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.680497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.680758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.680912 4786 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.681043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.681179 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.696722 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.716150 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.730377 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.747521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.775121 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.783771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.783825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.783840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.783859 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.783872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.790787 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.801858 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.813672 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.825015 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.841873 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.886255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.886289 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.886297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.886309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.886318 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.988325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.988374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.988389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.988407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:24 crc kubenswrapper[4786]: I0127 00:06:24.988419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:24Z","lastTransitionTime":"2026-01-27T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.090681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.090731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.090748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.090773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.090792 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.109602 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:48:49.724935958 +0000 UTC Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.147129 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.147163 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.147130 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:25 crc kubenswrapper[4786]: E0127 00:06:25.147286 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:25 crc kubenswrapper[4786]: E0127 00:06:25.147458 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:25 crc kubenswrapper[4786]: E0127 00:06:25.147592 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.162374 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.174072 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.192867 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.193795 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.193831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.193842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.193858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.193871 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.213744 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.247910 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a74248
65fbd24fd94400590815e603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.264059 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.277976 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.293483 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.295764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.295818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.295837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.295860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.295881 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.307267 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.323975 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2
f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.334813 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.345985 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.356224 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.367551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.380838 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.397936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.397965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.397977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.397992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.398004 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.500966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.501297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.501436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.501610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.501743 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.604917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.604962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.604976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.604999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.605036 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.678946 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.707290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.707686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.707823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.708039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.708250 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.811865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.811976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.811996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.812022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.812039 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.914923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.914957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.914965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.914980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:25 crc kubenswrapper[4786]: I0127 00:06:25.914988 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:25Z","lastTransitionTime":"2026-01-27T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.017539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.017607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.017623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.017641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.017656 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.110609 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:26:21.440925519 +0000 UTC Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.120330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.120363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.120377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.120413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.120426 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.227702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.227748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.227761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.227783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.227796 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.330501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.330556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.330590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.330608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.330617 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.383653 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.385131 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.434539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.434600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.434613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.434633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.434647 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.537822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.537879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.537897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.537921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.537939 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.642067 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.642097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.642108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.642123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.642135 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.745444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.745499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.745520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.745549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.745600 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.849020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.849092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.849120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.849153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.849173 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.952435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.952515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.952539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.953032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.953390 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:26Z","lastTransitionTime":"2026-01-27T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.963531 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:06:26 crc kubenswrapper[4786]: I0127 00:06:26.980270 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.000132 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:26Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.022246 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.052402 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.056960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.057044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.057063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.057088 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.057106 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.069787 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.086124 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.106883 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.110784 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:21:51.357969985 +0000 UTC Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.125827 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.145566 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.146618 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.146687 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.146701 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:27 crc kubenswrapper[4786]: E0127 00:06:27.146807 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:27 crc kubenswrapper[4786]: E0127 00:06:27.147115 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:27 crc kubenswrapper[4786]: E0127 00:06:27.147005 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.159817 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.159850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.159862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.159879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.159891 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.172232 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.191637 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.204255 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.221662 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.234475 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.263004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.263057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.263078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.263101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.263118 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.365757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.365835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.365857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.365888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.365913 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.389701 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/0.log" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.393888 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603" exitCode=1 Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.393938 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.394981 4786 scope.go:117] "RemoveContainer" containerID="7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.419173 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.444040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.464995 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.472447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.472511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.472539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.472613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.472641 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.481086 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.498990 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.513224 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.525891 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.558108 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.576268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.576308 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.576320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.576339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.576353 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.590200 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a74248
65fbd24fd94400590815e603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.607958 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.627896 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.645552 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.670195 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.679200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.679234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.679247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.679266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.679281 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.683788 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.782876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.782928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.782948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.782972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.782990 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.886792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.886855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.886870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.886893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.886913 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.990142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.990196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.990206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.990225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:27 crc kubenswrapper[4786]: I0127 00:06:27.990239 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:27Z","lastTransitionTime":"2026-01-27T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.093481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.093546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.093563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.093613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.093631 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.111953 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:30:52.711027549 +0000 UTC Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.195695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.195741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.195751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.195767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.195777 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.298220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.298281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.298300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.298327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.298345 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400658 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/0.log" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.400845 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.404488 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.404709 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.430300 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.451414 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.467079 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.484227 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd
6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.503720 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.503776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.503799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.503825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.503843 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.504262 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.520788 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.539427 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.555101 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.568199 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.581518 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.596799 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.606806 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.607035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.607150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.607263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.607368 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.618702 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.649228 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce
862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.671201 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.710889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.710964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.710989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.711020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.711038 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.814934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.815008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.815027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.815050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.815066 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.924820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.924895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.924919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.924952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:28 crc kubenswrapper[4786]: I0127 00:06:28.924994 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:28Z","lastTransitionTime":"2026-01-27T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.028176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.028240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.028262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.028292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.028313 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.112329 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:30:19.23340526 +0000 UTC Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.132147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.132213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.132235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.132262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.132279 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.146819 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.146954 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.146980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:29 crc kubenswrapper[4786]: E0127 00:06:29.147529 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:29 crc kubenswrapper[4786]: E0127 00:06:29.148889 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:29 crc kubenswrapper[4786]: E0127 00:06:29.149150 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.235748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.235796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.235809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.235829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.235842 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.339261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.339347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.339375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.339408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.339432 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.410396 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/1.log" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.411272 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/0.log" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.415368 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2" exitCode=1 Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.415420 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.415539 4786 scope.go:117] "RemoveContainer" containerID="7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.417657 4786 scope.go:117] "RemoveContainer" containerID="16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2" Jan 27 00:06:29 crc kubenswrapper[4786]: E0127 00:06:29.417909 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.437737 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.441967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.442011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.442029 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.442050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.442067 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.455386 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.475529 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fd
e1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.504554 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce
862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.526374 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545087 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.545213 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.559408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.578340 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.596466 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.614037 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.626670 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.639703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.647550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.647606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.647619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.647637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.647652 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.651474 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.664629 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.750967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.751020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.751036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.751059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.751077 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.854386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.854443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.854458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.854478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.854494 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.883542 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw"] Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.884195 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.887147 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.887660 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.911751 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.933178 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.936810 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.936870 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.936911 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzq68\" (UniqueName: \"kubernetes.io/projected/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-kube-api-access-dzq68\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.936990 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.953029 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.957759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.957836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.957863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.957896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.957916 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:29Z","lastTransitionTime":"2026-01-27T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.971319 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:29 crc kubenswrapper[4786]: I0127 00:06:29.992329 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.014888 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.035401 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.037953 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.038095 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.038133 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.039184 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dzq68\" (UniqueName: \"kubernetes.io/projected/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-kube-api-access-dzq68\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.039098 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.039632 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.046397 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.056691 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.063739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.063786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.063799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.063825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.063840 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.075805 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzq68\" (UniqueName: \"kubernetes.io/projected/617f5804-7f6b-44cd-9f6b-2cffbd175ce2-kube-api-access-dzq68\") pod \"ovnkube-control-plane-749d76644c-tv5fw\" (UID: \"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.082177 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.108453 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.113185 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:42:47.948393583 +0000 UTC Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.125833 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.141120 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.152876 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.166537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.166613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.166626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.166645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.166657 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.168449 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.192171 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.211406 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" Jan 27 00:06:30 crc kubenswrapper[4786]: W0127 00:06:30.230837 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod617f5804_7f6b_44cd_9f6b_2cffbd175ce2.slice/crio-801fb232bd66ab01f42b7cc2bf31cfac631d4bb5acceae4a5771b73fc298603c WatchSource:0}: Error finding container 801fb232bd66ab01f42b7cc2bf31cfac631d4bb5acceae4a5771b73fc298603c: Status 404 returned error can't find the container with id 801fb232bd66ab01f42b7cc2bf31cfac631d4bb5acceae4a5771b73fc298603c Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.270923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.271075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.271094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.271123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.271143 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.374066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.374123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.374133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.374157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.374170 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.423285 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/1.log" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.429287 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" event={"ID":"617f5804-7f6b-44cd-9f6b-2cffbd175ce2","Type":"ContainerStarted","Data":"801fb232bd66ab01f42b7cc2bf31cfac631d4bb5acceae4a5771b73fc298603c"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.477432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.477487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.477504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.477526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.477540 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.580712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.580790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.580808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.580837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.580858 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.683740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.683801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.683817 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.683839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.683876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.787488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.787523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.787532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.787546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.787555 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.890126 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.890187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.890208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.890238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.890257 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.950414 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:30 crc kubenswrapper[4786]: E0127 00:06:30.950744 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:06:46.950693421 +0000 UTC m=+52.434380504 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.993368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.993453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.993497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.993533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:30 crc kubenswrapper[4786]: I0127 00:06:30.993555 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:30Z","lastTransitionTime":"2026-01-27T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.060916 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.060965 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.060989 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.061079 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:47.061053377 +0000 UTC m=+52.544740460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.060727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.061630 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.061711 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.061785 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.061912 4786 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.061953 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.061979 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.062054 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:47.062029725 +0000 UTC m=+52.545716818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.061976 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.062112 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.062159 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:47.062137288 +0000 UTC m=+52.545824371 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.062242 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:06:47.062172759 +0000 UTC m=+52.545859842 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.079948 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9czjg"] Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.080555 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.080667 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.096229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.096312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.096337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.096364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.096386 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.099614 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.112547 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.113601 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:16:03.311592227 +0000 UTC Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.129615 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.143161 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc 
kubenswrapper[4786]: I0127 00:06:31.146459 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.146561 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.146647 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.146724 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.146838 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.146942 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.158729 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.162420 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.162532 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlfx\" (UniqueName: \"kubernetes.io/projected/be80aa92-329a-4f72-9dbb-b717f533fffb-kube-api-access-2xlfx\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.170812 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.181201 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.195150 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.198853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.198900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.198910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.198925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.198938 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.205760 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.217557 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.229061 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.240475 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.255501 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.263043 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.263124 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlfx\" (UniqueName: \"kubernetes.io/projected/be80aa92-329a-4f72-9dbb-b717f533fffb-kube-api-access-2xlfx\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.263397 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.263600 4786 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:06:31.763495555 +0000 UTC m=+37.247182648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.268208 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.293825 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\
":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.295286 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlfx\" (UniqueName: \"kubernetes.io/projected/be80aa92-329a-4f72-9dbb-b717f533fffb-kube-api-access-2xlfx\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.301398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.301476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.301492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.301511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.301601 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.318430 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.404909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.404968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.405005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.405021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.405030 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.435698 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" event={"ID":"617f5804-7f6b-44cd-9f6b-2cffbd175ce2","Type":"ContainerStarted","Data":"4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.435735 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" event={"ID":"617f5804-7f6b-44cd-9f6b-2cffbd175ce2","Type":"ContainerStarted","Data":"a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.450087 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.462634 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.474913 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.484356 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.485672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.485744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.485769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.485800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.485822 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.497871 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.499376 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.502875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.502920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.502937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.502960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.502978 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.518253 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.520682 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.525533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.525622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.525648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.525676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.525695 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.535505 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.539384 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0
f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.544951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.544988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.545000 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.545014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.545028 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.550338 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.558292 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.561278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.561321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.561337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.561357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.561372 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.563835 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.576131 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.578882 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.579114 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.581269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.581505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.581725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.581925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.582106 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.587905 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.600521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.611748 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.629082 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.645602 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.672100 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7adadded1a1a1ccb3f5df14e23c1079404a7424865fbd24fd94400590815e603\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"message\\\":\\\"pping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:26.806968 6065 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 00:06:26.807025 6065 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 00:06:26.807061 6065 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 00:06:26.807062 6065 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 00:06:26.807102 6065 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:06:26.807102 6065 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 00:06:26.807110 6065 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 00:06:26.807121 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 00:06:26.807135 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:06:26.807207 6065 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 00:06:26.807222 6065 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 00:06:26.807250 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 00:06:26.807260 6065 factory.go:656] Stopping watch factory\\\\nI0127 00:06:26.807268 6065 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 00:06:26.807277 6065 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.684948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.685262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.685409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.685553 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.685713 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.766705 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.767644 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: E0127 00:06:31.767737 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:06:32.767714609 +0000 UTC m=+38.251401702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.788657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.788703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.788714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.788732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.788746 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.891981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.892055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.892092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.892120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.892142 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.995227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.995262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.995273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.995289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:31 crc kubenswrapper[4786]: I0127 00:06:31.995301 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:31Z","lastTransitionTime":"2026-01-27T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.098183 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.098279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.098303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.098341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.098367 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.114686 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:00:31.693131098 +0000 UTC Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.201337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.201409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.201422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.201444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.201457 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.304890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.304942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.304959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.304984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.305000 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.407877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.407932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.407953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.407982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.408004 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.510933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.511247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.511382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.511504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.511698 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.614731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.614778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.614794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.614815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.614835 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.717336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.717376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.717385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.717401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.717414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.781070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:32 crc kubenswrapper[4786]: E0127 00:06:32.781293 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:32 crc kubenswrapper[4786]: E0127 00:06:32.781404 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:06:34.7813802 +0000 UTC m=+40.265067243 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.820537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.820619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.820635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.820658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.820673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.923519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.923551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.923561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.923589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.923599 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:32Z","lastTransitionTime":"2026-01-27T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.961393 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.962629 4786 scope.go:117] "RemoveContainer" containerID="16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2" Jan 27 00:06:32 crc kubenswrapper[4786]: E0127 00:06:32.962892 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.982260 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:32 crc kubenswrapper[4786]: I0127 00:06:32.999310 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.012806 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.025562 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.026420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.026455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.026465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.026480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.026491 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.041358 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.053861 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.065423 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.080563 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.092657 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.105276 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.115471 4786 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:46:14.359663349 +0000 UTC Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.129054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.129113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.129129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.129151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.129165 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.130221 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.146631 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.146684 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.146637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.146637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:33 crc kubenswrapper[4786]: E0127 00:06:33.146776 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:33 crc kubenswrapper[4786]: E0127 00:06:33.146975 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:33 crc kubenswrapper[4786]: E0127 00:06:33.147186 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:33 crc kubenswrapper[4786]: E0127 00:06:33.147275 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.163626 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce
862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.178980 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.198798 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.216111 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.229484 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.231282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.231325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.231334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.231352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.231363 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.334528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.334609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.334628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.334645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.334656 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.437602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.437669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.437686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.437731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.437747 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.540467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.540544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.540606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.540641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.540667 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.643763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.643809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.643822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.643842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.643856 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.746888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.746932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.746943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.746961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.746972 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.849726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.849765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.849775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.849790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.849802 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.952326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.952390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.952402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.952420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:33 crc kubenswrapper[4786]: I0127 00:06:33.952436 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:33Z","lastTransitionTime":"2026-01-27T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.055692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.055745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.055756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.055773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.055785 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.116599 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:13:20.984540817 +0000 UTC Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.158159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.158200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.158211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.158227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.158239 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.261251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.261303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.261320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.261343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.261363 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.363683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.363746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.363765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.363792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.363810 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.466040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.466110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.466121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.466138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.466149 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.569454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.569512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.569531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.569554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.569602 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.672255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.672301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.672334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.672357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.672373 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.775355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.775445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.775466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.775491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.775509 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.799676 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:34 crc kubenswrapper[4786]: E0127 00:06:34.799877 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:34 crc kubenswrapper[4786]: E0127 00:06:34.799956 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:06:38.799936097 +0000 UTC m=+44.283623150 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.878023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.878063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.878074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.878088 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.878099 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.980934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.981011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.981029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.981053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:34 crc kubenswrapper[4786]: I0127 00:06:34.981077 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:34Z","lastTransitionTime":"2026-01-27T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.085246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.085336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.085355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.085390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.085415 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.117689 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:49:08.631588589 +0000 UTC Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.147498 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.147532 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.148346 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.148608 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:35 crc kubenswrapper[4786]: E0127 00:06:35.148599 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:35 crc kubenswrapper[4786]: E0127 00:06:35.148769 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:35 crc kubenswrapper[4786]: E0127 00:06:35.148916 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:35 crc kubenswrapper[4786]: E0127 00:06:35.149017 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.162553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.174648 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.188292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.188341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.188352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.188374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.188386 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.190408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.208415 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.235001 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.259768 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.275477 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.289934 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.290671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.290703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.290714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.290728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.290738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.304548 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.322623 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.336948 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.349584 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.360550 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.369987 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.380347 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 
00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.392691 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.393186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.393217 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.393228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.393246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.393257 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.495623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.495665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.495675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.495690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.495700 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.598062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.598114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.598126 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.598149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.598166 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.700888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.700952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.700970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.700997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.701015 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.804155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.804209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.804225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.804247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.804260 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.906882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.909373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.909413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.909439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:35 crc kubenswrapper[4786]: I0127 00:06:35.909457 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:35Z","lastTransitionTime":"2026-01-27T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.011919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.012188 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.012307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.012380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.012450 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.114720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.114765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.114776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.114793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.114805 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.118895 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:56:49.045720061 +0000 UTC Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.217263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.217303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.217312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.217328 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.217337 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.322280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.322968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.323199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.323303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.323428 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.427015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.427055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.427074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.427098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.427117 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.530087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.530150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.530167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.530195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.530215 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.633097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.633132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.633144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.633162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.633172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.735506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.735623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.735651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.735687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.735710 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.839703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.839773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.839792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.839819 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.839841 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.943302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.943660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.943840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.943985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:36 crc kubenswrapper[4786]: I0127 00:06:36.944109 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:36Z","lastTransitionTime":"2026-01-27T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.046167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.046227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.046241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.046256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.046268 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.119099 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:15:07.637872233 +0000 UTC Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.147196 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.147236 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:37 crc kubenswrapper[4786]: E0127 00:06:37.147354 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.147213 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:37 crc kubenswrapper[4786]: E0127 00:06:37.147782 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.147851 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:37 crc kubenswrapper[4786]: E0127 00:06:37.147942 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:37 crc kubenswrapper[4786]: E0127 00:06:37.148017 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.151048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.151095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.151110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.151131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.151143 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.253110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.253195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.253212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.253234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.253253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.355525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.355592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.355609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.355632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.355648 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.458263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.458341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.458357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.458399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.458507 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.561043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.561083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.561097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.561113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.561126 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.664244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.664289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.664303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.664320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.664332 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.766948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.767030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.767047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.767071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.767088 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.870500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.870541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.870555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.870593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.870608 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.973134 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.973375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.973462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.973528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:37 crc kubenswrapper[4786]: I0127 00:06:37.973607 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:37Z","lastTransitionTime":"2026-01-27T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.077679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.077769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.077788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.077820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.077839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.120406 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:41:06.011064698 +0000 UTC Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.181538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.182307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.182436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.182593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.182713 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.285684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.286000 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.286137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.286283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.286418 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.389637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.389937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.390162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.390358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.390538 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.497265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.497292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.497300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.497312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.497321 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.600425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.600489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.600510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.600537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.600558 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.703978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.704036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.704050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.704077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.704093 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.806510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.806544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.806556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.806593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.806605 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.900926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:38 crc kubenswrapper[4786]: E0127 00:06:38.901141 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:38 crc kubenswrapper[4786]: E0127 00:06:38.901477 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:06:46.9014498 +0000 UTC m=+52.385136873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.909952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.910105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.910208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.910342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:38 crc kubenswrapper[4786]: I0127 00:06:38.910435 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:38Z","lastTransitionTime":"2026-01-27T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.013082 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.013141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.013164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.013195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.013216 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.116196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.116251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.116269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.116293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.116311 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.121196 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:08:04.538196518 +0000 UTC Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.146894 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:39 crc kubenswrapper[4786]: E0127 00:06:39.147077 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.147477 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.147563 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:39 crc kubenswrapper[4786]: E0127 00:06:39.147920 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:39 crc kubenswrapper[4786]: E0127 00:06:39.148073 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.147659 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:39 crc kubenswrapper[4786]: E0127 00:06:39.148224 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.219788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.219887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.219907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.219946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.219969 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.323224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.323300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.323325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.323356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.323380 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.426723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.426870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.426891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.426921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.426939 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.529900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.529962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.529971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.529994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.530006 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.633451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.633504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.633521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.633543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.633562 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.736199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.736260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.736283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.736311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.736331 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.839223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.839284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.839324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.839357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.839380 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.943122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.943190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.943212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.943240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:39 crc kubenswrapper[4786]: I0127 00:06:39.943262 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:39Z","lastTransitionTime":"2026-01-27T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.045999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.046115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.046135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.046158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.046175 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.122127 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:49:09.962714101 +0000 UTC Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.148857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.148910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.148927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.148948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.148970 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.252349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.252405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.252422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.252446 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.252461 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.354943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.354997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.355013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.355043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.355060 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.457312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.457367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.457385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.457410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.457433 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.560865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.560967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.560990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.561027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.561051 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.664450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.664541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.664559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.664619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.664640 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.767651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.768090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.768252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.768397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.768514 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.871265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.871325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.871343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.871367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.871385 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.974466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.974828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.974978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.975125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:40 crc kubenswrapper[4786]: I0127 00:06:40.975253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:40Z","lastTransitionTime":"2026-01-27T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.078702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.078754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.078772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.078795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.078811 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.122237 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:00:10.469658792 +0000 UTC Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.147269 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.147867 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.148154 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.148412 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.149226 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.149345 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.149426 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.149560 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.181329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.181372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.181384 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.181402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.181417 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.284556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.284635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.284653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.284676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.284696 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.387623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.387681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.387699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.387722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.387739 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.490466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.490525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.490547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.490620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.490647 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.593718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.593764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.593780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.593807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.593824 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.595282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.595337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.595356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.595380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.595452 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.616430 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.622558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.622648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.622666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.622689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.622706 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.642054 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.646528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.646636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.646662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.646693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.646715 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.667321 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.672156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.672230 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.672256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.672286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.672304 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.692160 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.696605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.696682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.696701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.696726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.696743 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.715978 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:41 crc kubenswrapper[4786]: E0127 00:06:41.716265 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.718729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.718795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.718869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.718956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.718988 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.823154 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.823219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.823253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.823285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.823306 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.926552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.926628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.926641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.926661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:41 crc kubenswrapper[4786]: I0127 00:06:41.926675 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:41Z","lastTransitionTime":"2026-01-27T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.029500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.029530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.029538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.029551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.029561 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.099914 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.111776 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.119982 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.123195 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:16:05.797013317 +0000 UTC Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.132643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.132706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.132731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.132758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.132779 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.133265 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.149454 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.164175 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.183652 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.212807 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.231730 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.235317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.235377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.235398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.235426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.235444 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.249418 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.269774 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.285161 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc 
kubenswrapper[4786]: I0127 00:06:42.303038 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.324030 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 
00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 
00:06:42.338483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.338534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.338546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.338590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.338606 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.343040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.361593 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.376097 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 
00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.392198 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.441973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.442078 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.442096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.442120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.442137 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.544774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.544899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.544917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.544943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.544977 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.647838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.647961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.647991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.648017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.648035 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.750660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.750717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.750735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.750758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.750779 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.853952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.854016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.854038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.854065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.854086 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.957243 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.957304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.957327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.957357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:42 crc kubenswrapper[4786]: I0127 00:06:42.957377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:42Z","lastTransitionTime":"2026-01-27T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.060841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.060895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.060913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.060936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.060952 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.123669 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:12:35.345677558 +0000 UTC Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.147087 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:43 crc kubenswrapper[4786]: E0127 00:06:43.147286 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.147980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.148014 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.148181 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:43 crc kubenswrapper[4786]: E0127 00:06:43.148348 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:43 crc kubenswrapper[4786]: E0127 00:06:43.148497 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:43 crc kubenswrapper[4786]: E0127 00:06:43.148694 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.165620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.166019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.166098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.166124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.166175 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.269603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.269666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.269684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.269708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.269725 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.372935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.373503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.373532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.373564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.373623 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.476261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.476327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.476348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.476378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.476401 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.579541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.579597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.579610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.579625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.579637 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.682072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.682163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.682179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.682200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.682215 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.785421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.785482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.785503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.785526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.785544 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.888517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.888614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.888633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.888660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.888685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.991422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.991464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.991479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.991502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:43 crc kubenswrapper[4786]: I0127 00:06:43.991518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:43Z","lastTransitionTime":"2026-01-27T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.093679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.093762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.093786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.093816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.093839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.124140 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:59:56.994755026 +0000 UTC Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.196498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.196799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.197026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.197234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.197431 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.300657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.300718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.300736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.300760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.300777 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.404231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.404288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.404308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.404330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.404353 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.506372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.506423 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.506440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.506462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.506479 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.609008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.609080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.609104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.609130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.609147 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.712772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.713165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.713337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.713507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.713790 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.816684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.817099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.817364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.817531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.817723 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.920424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.920487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.920505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.920527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:44 crc kubenswrapper[4786]: I0127 00:06:44.920549 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:44Z","lastTransitionTime":"2026-01-27T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.022721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.022786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.022944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.022972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.022989 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.124322 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:19:55.193397076 +0000 UTC Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.125631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.125664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.125676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.125693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.125705 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.147023 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:45 crc kubenswrapper[4786]: E0127 00:06:45.147334 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.147422 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.147465 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.147430 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:45 crc kubenswrapper[4786]: E0127 00:06:45.147779 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:45 crc kubenswrapper[4786]: E0127 00:06:45.147636 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:45 crc kubenswrapper[4786]: E0127 00:06:45.148098 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.166967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.189997 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.209865 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.227006 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.228853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.228895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.228911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.228932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.228947 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.243085 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.265068 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.282704 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.297093 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.311733 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.327377 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.331672 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.331731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.331751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.331778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.331796 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.351313 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.374867 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce
862129efa38ae8dccb1027d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.392533 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.408848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.426223 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.434846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.434883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.434895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.434912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.434927 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.443105 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.456104 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:45Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.537201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.537262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.537279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.537302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.537321 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.640454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.640543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.640602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.640638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.640668 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.744194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.744256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.744276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.744302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.744379 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.847412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.847504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.847523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.847548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.847607 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.950462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.950598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.950621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.950646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:45 crc kubenswrapper[4786]: I0127 00:06:45.950664 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:45Z","lastTransitionTime":"2026-01-27T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.052798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.052827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.052836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.052849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.052857 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.124714 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:21:27.308522913 +0000 UTC Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.148132 4786 scope.go:117] "RemoveContainer" containerID="16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.157251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.157302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.157319 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.157340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.157356 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.261346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.261741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.261760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.261782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.261799 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.364651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.364739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.364770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.364801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.364825 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.468207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.468236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.468244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.468257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.468266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.535270 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/1.log" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.539962 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.540839 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.570872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.570921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.570941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.570963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.570979 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.581040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.599704 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.619995 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.635679 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.650306 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.668126 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.673939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.674180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.674312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.674424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.674526 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.693417 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.708167 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.725849 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 
00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.744116 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.765009 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.776808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.776880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.776903 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.776935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.776957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.778304 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.788515 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.797810 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.811504 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.828899 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.840592 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:46Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.879122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.879167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.879180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.879200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.879214 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.933850 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:46 crc kubenswrapper[4786]: E0127 00:06:46.933977 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:46 crc kubenswrapper[4786]: E0127 00:06:46.934038 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:07:02.934025716 +0000 UTC m=+68.417712759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.981418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.981641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.981740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.981808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:46 crc kubenswrapper[4786]: I0127 00:06:46.981870 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:46Z","lastTransitionTime":"2026-01-27T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.035100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.035380 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:07:19.035357983 +0000 UTC m=+84.519045056 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.084128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.084199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.084224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.084251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.084273 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.124893 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:37:57.175304869 +0000 UTC Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.136268 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.136303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.136328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.136349 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136439 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136487 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136500 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136509 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136548 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:07:19.136517705 +0000 UTC m=+84.620204778 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136449 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136619 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:07:19.136598448 +0000 UTC m=+84.620285531 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136642 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136668 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136733 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:07:19.136711481 +0000 UTC m=+84.620398554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136451 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.136802 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:07:19.136784393 +0000 UTC m=+84.620471476 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.146833 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.146878 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.146899 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.146847 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.147033 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.147160 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.147281 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.147318 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.187194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.187285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.187302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.187325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.187342 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.290403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.290473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.290492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.290516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.290535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.394288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.394361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.394378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.394405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.394439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.498094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.498144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.498159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.498178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.498191 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.547779 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/2.log" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.548939 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/1.log" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.553503 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" exitCode=1 Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.553719 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.554097 4786 scope.go:117] "RemoveContainer" containerID="16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.555011 4786 scope.go:117] "RemoveContainer" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" Jan 27 00:06:47 crc kubenswrapper[4786]: E0127 00:06:47.555348 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.578415 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.595336 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.600976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.601202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.601363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.601504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.601684 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.610408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.620987 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.636029 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.657679 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b4e1942019b6df28e7277cd58e432487f330ce862129efa38ae8dccb1027d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:28Z\\\",\\\"message\\\":\\\"27 00:06:28.426218 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 00:06:28.426240 6193 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:28.426274 6193 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0127 00:06:28.426290 6193 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0127 00:06:28.426301 6193 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0127 00:06:28.426321 6193 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0127 00:06:28.426605 6193 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.426793 6193 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427479 6193 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.427644 6193 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:06:28.429691 6193 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:28.429750 6193 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:28.429835 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.670189 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11
f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.685667 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.698705 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.704684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.704722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.704732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.704748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.704761 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.713381 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.725469 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.741168 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.763307 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.784880 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.803319 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.807102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.807139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.807152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.807168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.807179 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.819966 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.834069 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:47Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.909787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.909846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.909862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.909885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:47 crc kubenswrapper[4786]: I0127 00:06:47.909906 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:47Z","lastTransitionTime":"2026-01-27T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.012867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.012932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.012949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.012973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.012989 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.115861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.115906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.115917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.115934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.115947 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.125048 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:10:40.298384468 +0000 UTC Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.219277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.219342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.219361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.219390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.219409 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.321545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.322118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.322143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.322173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.322196 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.424872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.424938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.424955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.424981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.424998 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.528126 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.528198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.528223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.528253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.528277 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.561425 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/2.log" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.567090 4786 scope.go:117] "RemoveContainer" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" Jan 27 00:06:48 crc kubenswrapper[4786]: E0127 00:06:48.567341 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.587900 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.607324 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.620952 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.631771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.631833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.631852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.631877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.631897 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.640272 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.659679 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.677067 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.696136 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.717365 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.735222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.735290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.735315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.735347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.735371 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.741306 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.766523 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.788375 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.814964 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.838617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.838686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc 
kubenswrapper[4786]: I0127 00:06:48.838708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.838737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.838759 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.849258 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0
f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.868949 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.884240 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.901015 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.922939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.941770 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.941834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.941853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.941877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:48 crc kubenswrapper[4786]: I0127 00:06:48.941897 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:48Z","lastTransitionTime":"2026-01-27T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.044544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.044638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.044661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.044690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.044712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.125293 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:26:59.610528133 +0000 UTC Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.146760 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.146891 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:49 crc kubenswrapper[4786]: E0127 00:06:49.146980 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.146787 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.147017 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:49 crc kubenswrapper[4786]: E0127 00:06:49.147216 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:49 crc kubenswrapper[4786]: E0127 00:06:49.147385 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:49 crc kubenswrapper[4786]: E0127 00:06:49.147505 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.148283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.148345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.148366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.148395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.148417 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.251648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.251730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.251751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.251773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.251789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.354767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.354835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.354860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.354888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.354911 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.458113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.458184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.458208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.458235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.458258 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.561714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.561767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.561783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.561806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.561822 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.664496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.664633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.664659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.664695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.664720 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.768744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.768805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.768823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.768849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.768866 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.870974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.871042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.871065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.871089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.871106 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.974268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.974322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.974344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.974368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:49 crc kubenswrapper[4786]: I0127 00:06:49.974385 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:49Z","lastTransitionTime":"2026-01-27T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.077995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.078057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.078075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.078158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.078176 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.125469 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:46:53.636569462 +0000 UTC Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.181940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.183332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.183520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.183690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.183842 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.287003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.287055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.287071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.287094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.287111 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.389879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.389938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.389959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.389988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.390009 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.492967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.493007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.493019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.493036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.493047 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.595543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.595619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.595637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.595660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.595677 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.698848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.698901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.698919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.698941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.698958 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.802144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.802204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.802221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.802246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.802266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.905031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.905090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.905107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.905132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:50 crc kubenswrapper[4786]: I0127 00:06:50.905150 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:50Z","lastTransitionTime":"2026-01-27T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.007762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.007833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.007856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.007883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.007907 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.111427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.111516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.111536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.111562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.111634 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.126135 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:24:18.009662711 +0000 UTC Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.146511 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.146599 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.146684 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.146775 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.146809 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.146943 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.147311 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.147149 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.214363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.214405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.214420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.214440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.214453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.317667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.317719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.317736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.317759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.317780 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.421098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.421148 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.421165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.421187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.421208 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.524468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.524512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.524528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.524551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.524599 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.626961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.627017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.627035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.627058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.627074 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.729366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.729735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.729954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.730116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.730276 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.833723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.833992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.834190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.834382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.834636 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.937956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.938017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.938033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.938057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.938075 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.957348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.957400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.957418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.957441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.957457 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.975527 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:51Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.981368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.981555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.981763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.981915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:51 crc kubenswrapper[4786]: I0127 00:06:51.982071 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:51Z","lastTransitionTime":"2026-01-27T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:51 crc kubenswrapper[4786]: E0127 00:06:51.998434 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:51Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.002824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.002874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.002890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.002911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.002926 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: E0127 00:06:52.017500 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:52Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.022598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.022639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.022650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.022668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.022680 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: E0127 00:06:52.043775 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:52Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.048301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.048477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.048660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.048776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.048934 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: E0127 00:06:52.065769 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:52Z is after 
2025-08-24T17:21:41Z" Jan 27 00:06:52 crc kubenswrapper[4786]: E0127 00:06:52.065933 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.067996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.068042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.068058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.068079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.068095 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.126341 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:59:06.351615884 +0000 UTC Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.171826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.171867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.171876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.171892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.171901 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.275058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.275351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.275469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.275620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.275737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.379047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.379385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.379741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.380058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.380375 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.483609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.483666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.483687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.483710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.483728 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.587068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.587147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.587166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.587681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.587737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.691178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.691255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.691280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.691312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.691335 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.794063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.794122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.794318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.794346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.794370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.896980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.897037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.897062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.897089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.897110 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:52Z","lastTransitionTime":"2026-01-27T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.999918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.999974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:52 crc kubenswrapper[4786]: I0127 00:06:52.999990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.000012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.000029 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.102357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.102402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.102414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.102433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.102446 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.127179 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:04:03.948038721 +0000 UTC Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.146645 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.146717 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.146752 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.146662 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:53 crc kubenswrapper[4786]: E0127 00:06:53.146843 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:53 crc kubenswrapper[4786]: E0127 00:06:53.146940 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:53 crc kubenswrapper[4786]: E0127 00:06:53.147139 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:53 crc kubenswrapper[4786]: E0127 00:06:53.147275 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.205843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.205912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.205943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.205983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.206006 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.308790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.309120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.309292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.309468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.309637 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.412313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.412362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.412378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.412402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.412418 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.515776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.515851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.515876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.515904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.515925 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.618686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.618735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.618746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.618763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.618773 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.721699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.722418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.722650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.722849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.723042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.825693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.825753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.825773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.825799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.825839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.928375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.928435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.928453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.928478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:53 crc kubenswrapper[4786]: I0127 00:06:53.928495 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:53Z","lastTransitionTime":"2026-01-27T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.032176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.032241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.032257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.032288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.032306 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.127799 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:07:32.845301284 +0000 UTC Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.135014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.135080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.135097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.135120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.135139 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.237792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.237859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.237876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.237899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.237917 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.341528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.341637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.341659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.341687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.341712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.449766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.449835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.449861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.449892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.449915 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.553672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.553748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.553769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.553796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.553817 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.656037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.656089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.656103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.656120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.656132 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.758728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.758787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.758805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.758832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.758850 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.861598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.861647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.861658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.861675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.861687 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.964690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.964738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.964751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.964768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:54 crc kubenswrapper[4786]: I0127 00:06:54.964782 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:54Z","lastTransitionTime":"2026-01-27T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.068038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.068086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.068097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.068133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.068144 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.128932 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:41:52.557372344 +0000 UTC Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.146503 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.146544 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:55 crc kubenswrapper[4786]: E0127 00:06:55.146671 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.146711 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:55 crc kubenswrapper[4786]: E0127 00:06:55.146825 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:55 crc kubenswrapper[4786]: E0127 00:06:55.146923 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.147467 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:55 crc kubenswrapper[4786]: E0127 00:06:55.147905 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.169901 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.171172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.171518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.171700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.172256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.172484 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.185805 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.201083 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 
00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.218465 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.245073 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc 
kubenswrapper[4786]: I0127 00:06:55.264513 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.275021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.275273 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.275460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.275619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.275751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.278311 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.296967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.320768 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0
f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.339049 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.352647 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.364889 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.380896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.381244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.381428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.381522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.381632 4786 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.381549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.399127 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.412077 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.427430 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.447906 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.484167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.484241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.484265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.484293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.484315 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.586976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.587212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.587509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.587692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.588129 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.690662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.691008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.691255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.691410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.691557 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.794668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.794718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.794738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.794762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.794781 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.897999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.898227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.898388 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.898479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:55 crc kubenswrapper[4786]: I0127 00:06:55.898500 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:55Z","lastTransitionTime":"2026-01-27T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.001299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.001351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.001365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.001383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.001395 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.104680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.104966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.105064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.105158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.105251 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.129236 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:17:23.886198177 +0000 UTC Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.208161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.208227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.208248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.208275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.208296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.311398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.311459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.311480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.311509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.311533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.415360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.415421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.415456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.415492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.415517 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.518799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.519220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.519433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.519629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.519769 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.622969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.623277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.623453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.623621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.623760 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.726771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.726842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.726858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.726888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.726908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.830048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.830106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.830123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.830146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.830164 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.932934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.932989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.933002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.933020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:56 crc kubenswrapper[4786]: I0127 00:06:56.933038 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:56Z","lastTransitionTime":"2026-01-27T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.035857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.035921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.035943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.035977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.036001 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.130455 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:00:54.498949021 +0000 UTC Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.139708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.139754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.139773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.139796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.139813 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.147501 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.147597 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.147875 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:57 crc kubenswrapper[4786]: E0127 00:06:57.148046 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.148065 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:57 crc kubenswrapper[4786]: E0127 00:06:57.148369 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:57 crc kubenswrapper[4786]: E0127 00:06:57.148520 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:57 crc kubenswrapper[4786]: E0127 00:06:57.148690 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.242782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.242835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.242851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.242878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.242896 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.346246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.346307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.346324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.346348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.346366 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.449298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.449413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.449429 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.449447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.449459 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.552076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.552130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.552140 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.552155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.552164 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.655035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.655098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.655117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.655143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.655163 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.758414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.758599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.758628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.758655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.758673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.861870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.861915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.861931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.861952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.861973 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.964674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.964722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.964738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.964760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:57 crc kubenswrapper[4786]: I0127 00:06:57.964777 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:57Z","lastTransitionTime":"2026-01-27T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.069452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.069556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.069609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.069633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.069653 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.131852 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:18:26.949172368 +0000 UTC Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.172259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.172298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.172316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.172337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.172353 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.275707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.275752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.275763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.275781 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.275793 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.378499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.378548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.378562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.378599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.378613 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.480940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.480976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.480987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.481001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.481013 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.583992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.584039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.584051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.584069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.584080 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.687716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.687783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.687811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.687839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.687860 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.790530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.790658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.790718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.790752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.790775 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.893564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.893649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.893666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.893689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.893707 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.996854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.996908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.996924 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.996946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:58 crc kubenswrapper[4786]: I0127 00:06:58.996967 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:58Z","lastTransitionTime":"2026-01-27T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.099878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.099941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.099963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.099989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.100007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.132454 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:58:08.988346299 +0000 UTC Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.147035 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.147085 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.147120 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.147073 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:06:59 crc kubenswrapper[4786]: E0127 00:06:59.148047 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:06:59 crc kubenswrapper[4786]: E0127 00:06:59.148210 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:06:59 crc kubenswrapper[4786]: E0127 00:06:59.148314 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:06:59 crc kubenswrapper[4786]: E0127 00:06:59.148412 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.203298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.203361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.203374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.203391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.203401 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.306974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.307023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.307041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.307063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.307080 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.409411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.409457 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.409474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.409495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.409509 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.512056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.512098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.512114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.512135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.512151 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.615196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.615240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.615251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.615266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.615279 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.718177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.718228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.718241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.718262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.718275 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.820632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.821024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.821042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.821077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.821095 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.922909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.922968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.922985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.923006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:06:59 crc kubenswrapper[4786]: I0127 00:06:59.923018 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:06:59Z","lastTransitionTime":"2026-01-27T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.025536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.025590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.025601 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.025618 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.025630 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.127954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.127996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.128007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.128023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.128037 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.133329 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:20:57.185515112 +0000 UTC Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.230967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.231018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.231031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.231048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.231059 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.334622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.334721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.334739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.334792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.334809 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.437385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.437444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.437461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.437483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.437501 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.540233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.540273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.540284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.540298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.540311 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.642965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.643007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.643016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.643032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.643042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.745653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.745700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.745712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.745730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.745743 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.848225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.848273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.848285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.848300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.848311 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.950620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.950668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.950683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.950707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:00 crc kubenswrapper[4786]: I0127 00:07:00.950723 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:00Z","lastTransitionTime":"2026-01-27T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.052854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.052894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.052908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.052925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.052936 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.134066 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:39:58.480276084 +0000 UTC Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.147426 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.147426 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.147528 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:01 crc kubenswrapper[4786]: E0127 00:07:01.147678 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.147696 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:01 crc kubenswrapper[4786]: E0127 00:07:01.147961 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:01 crc kubenswrapper[4786]: E0127 00:07:01.148085 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:01 crc kubenswrapper[4786]: E0127 00:07:01.148133 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.148375 4786 scope.go:117] "RemoveContainer" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" Jan 27 00:07:01 crc kubenswrapper[4786]: E0127 00:07:01.148646 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.154397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.154425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.154435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.154448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.154457 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.256454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.256493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.256505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.256521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.256532 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.358703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.358796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.358815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.359118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.359361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.462103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.462142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.462153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.462167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.462178 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.563914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.563950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.563959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.563971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.563980 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.666139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.666174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.666182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.666194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.666204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.768668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.768711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.768720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.768735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.768748 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.874055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.874119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.874137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.874160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.874177 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.977172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.977230 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.977249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.977273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:01 crc kubenswrapper[4786]: I0127 00:07:01.977295 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:01Z","lastTransitionTime":"2026-01-27T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.079805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.079895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.079912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.079934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.079953 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.134818 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:52:07.754960946 +0000 UTC Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.183211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.183263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.183281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.183306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.183327 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.285156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.285219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.285238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.285260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.285277 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.388117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.388168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.388179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.388194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.388204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.462995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.463043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.463054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.463068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.463077 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.480688 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.485026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.485044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.485052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.485064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.485071 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.496555 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.499955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.499999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.500011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.500028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.500062 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.513507 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.518060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.518095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.518105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.518118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.518127 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.530918 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.569040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.569087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.569111 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.569128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.569138 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.582811 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:02 crc kubenswrapper[4786]: E0127 00:07:02.582917 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.585055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.585079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.585087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.585100 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.585110 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.687794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.687836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.687846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.687860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.687872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.789986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.790069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.790083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.790105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.790118 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.892394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.892452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.892467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.892483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.892493 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.995237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.995279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.995287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.995300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:02 crc kubenswrapper[4786]: I0127 00:07:02.995312 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:02Z","lastTransitionTime":"2026-01-27T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.005684 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.005793 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.005846 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:07:35.005831346 +0000 UTC m=+100.489518389 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.098120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.098170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.098186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.098210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.098228 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.135708 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:36:31.488659751 +0000 UTC Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.147535 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.147652 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.147695 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.147652 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.148070 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.148192 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.148428 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:03 crc kubenswrapper[4786]: E0127 00:07:03.148559 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.200005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.200039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.200048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.200061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.200071 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.303100 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.303144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.303160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.303182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.303197 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.407127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.407179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.407190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.407208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.407221 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.510760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.510831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.510849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.510874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.510893 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.614040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.614144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.614162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.614187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.614204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.716739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.716785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.716797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.716815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.716827 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.846453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.846492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.846501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.846514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.846524 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.948467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.948520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.948532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.948548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:03 crc kubenswrapper[4786]: I0127 00:07:03.948560 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:03Z","lastTransitionTime":"2026-01-27T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.050559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.050651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.050676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.050705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.050727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.136670 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:26:06.90423585 +0000 UTC Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.153495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.153546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.153559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.153595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.153611 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.255886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.255927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.255937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.255950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.255964 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.359227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.359306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.359337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.359367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.359388 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.462209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.462261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.462279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.462301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.462318 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.564988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.565088 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.565179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.565215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.565285 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.614993 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/0.log" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.615127 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d790bab-fb2b-4745-a195-65359a962f52" containerID="1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81" exitCode=1 Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.615168 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerDied","Data":"1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.615700 4786 scope.go:117] "RemoveContainer" containerID="1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.631240 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.642853 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.655545 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.667616 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.673732 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.673766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.673774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.673788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.673797 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.686377 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.711552 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0
f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.730422 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.744971 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.760292 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.774722 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.776805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.776854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.776870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.776888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.776900 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.785189 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.798441 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.812918 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.827521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.839807 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.851457 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.862849 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:04Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.879169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.879223 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.879239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.879259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.879275 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.980864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.980895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.980903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.980915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:04 crc kubenswrapper[4786]: I0127 00:07:04.980924 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:04Z","lastTransitionTime":"2026-01-27T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.083439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.084041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.084109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.084168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.084225 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.137111 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:12:30.01076516 +0000 UTC Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.149070 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:05 crc kubenswrapper[4786]: E0127 00:07:05.149252 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.149509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:05 crc kubenswrapper[4786]: E0127 00:07:05.149666 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.149873 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:05 crc kubenswrapper[4786]: E0127 00:07:05.150004 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.150214 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:05 crc kubenswrapper[4786]: E0127 00:07:05.150364 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.163143 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.181057 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.186383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.186538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.186675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.186810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.186900 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.193208 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.208007 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.215973 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.225227 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.238214 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.254767 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.266168 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.278265 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.289060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.289089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.289102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.289119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.289130 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.292240 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.305133 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.314865 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.323224 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.333730 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.346473 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.364758 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.391310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.391343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.391354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.391370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.391381 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.494079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.494278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.494342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.494413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.494478 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.596737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.596774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.596787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.596807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.596819 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.619556 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/0.log" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.619635 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerStarted","Data":"a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.638744 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.653485 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.665761 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.678256 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.692486 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.699871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.699917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.699934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.699967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.699983 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.706395 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.720796 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.736994 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.762774 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc 
kubenswrapper[4786]: I0127 00:07:05.776007 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.794344 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.802199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.802233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.802241 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.802259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.802272 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.819266 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.840554 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0
f750cb33c11c0db2e18f070b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.857445 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.874478 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.900253 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.904450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.904608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.904739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.904832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.904912 4786 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:05Z","lastTransitionTime":"2026-01-27T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:05 crc kubenswrapper[4786]: I0127 00:07:05.914589 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:05Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.007831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.007899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.007908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.007930 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.007941 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.111109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.112285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.112422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.112548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.112712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.138175 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:29:12.582090068 +0000 UTC Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.216377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.216466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.216478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.216495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.216511 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.318834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.318878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.318887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.318903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.318913 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.421446 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.421489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.421498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.421512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.421521 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.524174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.524223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.524239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.524264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.524282 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.626383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.626621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.626751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.626841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.626933 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.729999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.730037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.730049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.730063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.730074 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.833297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.833332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.833340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.833353 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.833361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.936452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.936498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.936508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.936523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:06 crc kubenswrapper[4786]: I0127 00:07:06.936534 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:06Z","lastTransitionTime":"2026-01-27T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.039395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.039465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.039477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.039492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.039504 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.138389 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:21:48.038833644 +0000 UTC Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.142317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.142377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.142400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.142427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.142449 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.146702 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.146749 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.146774 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:07 crc kubenswrapper[4786]: E0127 00:07:07.146885 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.146953 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:07 crc kubenswrapper[4786]: E0127 00:07:07.147088 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:07 crc kubenswrapper[4786]: E0127 00:07:07.147179 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:07 crc kubenswrapper[4786]: E0127 00:07:07.147286 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.245254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.245290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.245301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.245317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.245334 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.348046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.348083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.348095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.348110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.348121 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.450722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.450764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.450776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.450792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.450804 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.553478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.553560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.553609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.553635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.553654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.656766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.656824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.656846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.656873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.656894 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.759643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.759716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.759740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.759767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.759789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.862354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.862393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.862404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.862420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.862431 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.965007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.965054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.965070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.965090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:07 crc kubenswrapper[4786]: I0127 00:07:07.965107 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:07Z","lastTransitionTime":"2026-01-27T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.067337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.067395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.067412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.067435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.067451 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.139248 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:23:06.589356635 +0000 UTC Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.169673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.169727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.169744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.169766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.169783 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.272403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.272471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.272496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.272525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.272544 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.375124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.375212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.375231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.375255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.375271 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.478603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.478667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.478687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.478713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.478733 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.581304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.581361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.581377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.581401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.581419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.684218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.684274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.684286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.684303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.684316 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.787616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.787679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.787689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.787711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.787727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.889841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.889904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.889915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.889935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.889949 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.993425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.993492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.993509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.993535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:08 crc kubenswrapper[4786]: I0127 00:07:08.993552 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:08Z","lastTransitionTime":"2026-01-27T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.096309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.096361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.096372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.096389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.096402 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.139847 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:00:54.925497649 +0000 UTC Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.147643 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.147701 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.147786 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:09 crc kubenswrapper[4786]: E0127 00:07:09.147840 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.147897 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:09 crc kubenswrapper[4786]: E0127 00:07:09.147968 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:09 crc kubenswrapper[4786]: E0127 00:07:09.148069 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:09 crc kubenswrapper[4786]: E0127 00:07:09.148154 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.199202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.199248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.199261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.199278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.199291 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.302110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.302248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.302262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.302280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.302292 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.404712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.404749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.404757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.404770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.404780 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.507244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.507308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.507330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.507357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.507378 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.609414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.609454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.609467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.609483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.609496 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.712508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.712556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.712586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.712606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.712619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.817133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.817259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.817280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.817313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.817331 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.920740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.920815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.920836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.920858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:09 crc kubenswrapper[4786]: I0127 00:07:09.920876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:09Z","lastTransitionTime":"2026-01-27T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.022739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.022818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.022841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.022865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.022883 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.126414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.126476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.126499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.126525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.126546 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.140042 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:07:39.947969199 +0000 UTC Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.228782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.228857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.228882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.228912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.228940 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.331158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.331231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.331256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.331337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.331419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.434760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.434785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.434794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.434806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.434814 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.537205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.537281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.537304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.537331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.537349 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.639725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.639764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.639780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.639802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.639820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.743559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.743625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.743637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.743690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.743710 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.846426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.846485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.846503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.846524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.846541 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.950322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.950402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.950428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.950458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:10 crc kubenswrapper[4786]: I0127 00:07:10.950482 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:10Z","lastTransitionTime":"2026-01-27T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.053973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.054036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.054047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.054073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.054087 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.140626 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:22:50.201293026 +0000 UTC Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.147119 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.147205 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.147242 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.147259 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:11 crc kubenswrapper[4786]: E0127 00:07:11.147365 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:11 crc kubenswrapper[4786]: E0127 00:07:11.147617 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:11 crc kubenswrapper[4786]: E0127 00:07:11.147792 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:11 crc kubenswrapper[4786]: E0127 00:07:11.147914 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.156982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.157042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.157056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.157079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.157094 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.163724 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.260132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.260631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.260705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.260737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.260759 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.363651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.363710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.363723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.363742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.363756 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.466318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.466372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.466382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.466397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.466409 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.569722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.569789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.569810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.569834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.569851 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.672763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.672818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.672835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.672857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.672877 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.775938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.775995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.776012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.776036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.776053 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.878904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.878955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.878971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.878993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.879010 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.981545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.981607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.981620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.981638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:11 crc kubenswrapper[4786]: I0127 00:07:11.981678 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:11Z","lastTransitionTime":"2026-01-27T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.084992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.085047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.085064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.085087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.085132 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.141552 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:34:43.596769146 +0000 UTC Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.187238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.187380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.187400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.187422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.187439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.290697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.290746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.290762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.290789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.290813 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.393849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.393912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.393936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.393966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.393982 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.496761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.496804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.496816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.496831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.496843 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.601027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.601099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.601117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.601143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.601160 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.704093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.704136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.704153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.704179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.704197 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.731677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.731742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.731759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.731778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.731793 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.752552 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.759103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.759533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.759741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.759898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.760068 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.779425 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.783557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.783680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.783778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.783852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.783916 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.797551 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.806239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.806346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.806459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.806586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.806686 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.824437 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.828749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.828849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.828916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.828986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.829049 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.845384 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:12 crc kubenswrapper[4786]: E0127 00:07:12.845644 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.847518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.847588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.847610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.847637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.847654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.950789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.950858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.950869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.950892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:12 crc kubenswrapper[4786]: I0127 00:07:12.950908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:12Z","lastTransitionTime":"2026-01-27T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.054216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.054268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.054284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.054308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.054326 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.142167 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:39:12.809932029 +0000 UTC Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.147214 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.147299 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:13 crc kubenswrapper[4786]: E0127 00:07:13.147404 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.147509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.147534 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:13 crc kubenswrapper[4786]: E0127 00:07:13.147766 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:13 crc kubenswrapper[4786]: E0127 00:07:13.147860 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:13 crc kubenswrapper[4786]: E0127 00:07:13.148031 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.149250 4786 scope.go:117] "RemoveContainer" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.157498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.157545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.157562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.157613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.157632 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.261331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.261379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.261390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.261407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.261419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.364186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.364235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.364250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.364273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.364288 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.466789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.466857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.466875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.466898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.466925 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.578158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.578189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.578197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.578210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.578219 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.647764 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/2.log" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.650388 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.651403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.683941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.683979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.683990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.684006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.684017 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.696376 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.706725 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.721521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.738491 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.751441 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.762234 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.774585 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786126 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786134 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786148 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786158 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.786602 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.796585 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.806617 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.816839 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.830119 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.848489 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.861817 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.875747 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.885877 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.888215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.888255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.888265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.888280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.888289 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.900639 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.914325 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:13Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.993956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.994037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.994054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.994081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:13 crc kubenswrapper[4786]: I0127 00:07:13.994103 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:13Z","lastTransitionTime":"2026-01-27T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.096020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.096055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.096062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.096074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.096083 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.142597 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:38:12.657655242 +0000 UTC Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.198433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.198479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.198491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.198506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.198517 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.302074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.302144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.302166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.302196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.302219 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.405260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.405318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.405342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.405371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.405394 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.508174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.508228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.508247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.508269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.508287 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.610902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.610984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.611010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.611043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.611069 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.656544 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/3.log" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.657546 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/2.log" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.661106 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" exitCode=1 Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.661168 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.661244 4786 scope.go:117] "RemoveContainer" containerID="e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.662671 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:07:14 crc kubenswrapper[4786]: E0127 00:07:14.663034 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.682081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.701526 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.714603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.714645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.714662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.714686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.714703 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.715882 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.733946 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.753904 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.769127 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.789661 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.804924 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.817409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.817475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.817493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.817518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.817536 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.824103 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.846329 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.877497 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:13Z\\\",\\\"message\\\":\\\") from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 00:07:13.967026 6812 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:07:13.967140 6812 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.967170 6812 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.968094 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:07:13.968187 6812 factory.go:656] Stopping watch factory\\\\nI0127 00:07:13.968216 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:07:13.976666 6812 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 00:07:13.976712 6812 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 00:07:13.976810 6812 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:07:13.976855 6812 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:07:13.976959 6812 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.897318 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.912097 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.920160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.920213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.920233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.920260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.920277 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:14Z","lastTransitionTime":"2026-01-27T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.931448 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.949636 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.969393 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:14 crc kubenswrapper[4786]: I0127 00:07:14.983457 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:14.999880 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:14Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.022756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.022815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.022836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.022865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.022883 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.125932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.125989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.126008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.126031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.126049 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.143407 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:33:58.920614425 +0000 UTC Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.146897 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.146952 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.146978 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.146925 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:15 crc kubenswrapper[4786]: E0127 00:07:15.147076 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:15 crc kubenswrapper[4786]: E0127 00:07:15.147181 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:15 crc kubenswrapper[4786]: E0127 00:07:15.147331 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:15 crc kubenswrapper[4786]: E0127 00:07:15.147453 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.165232 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.182240 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.200745 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.228899 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.228994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.229018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.229050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.229074 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.228942 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.263170 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966
ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b22eb2d49a5092ec368836f0234bc277a4fbf0f750cb33c11c0db2e18f070b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:06:47Z\\\",\\\"message\\\":\\\"vr in Admin Network Policy controller\\\\nI0127 00:06:47.160648 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-76f77b778f-7k2vr Admin Network Policy controller: took 11.33µs\\\\nI0127 00:06:47.160661 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t in Admin Network Policy controller\\\\nI0127 00:06:47.160670 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-879f6c89f-l7x9t Admin Network Policy controller: took 10.41µs\\\\nI0127 00:06:47.160684 6417 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw in Admin Network Policy controller\\\\nI0127 00:06:47.160694 6417 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw Admin Network Policy controller: took 10.701µs\\\\nI0127 00:06:47.160781 6417 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 00:06:47.160874 6417 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 00:06:47.160914 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:06:47.161100 6417 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:06:47.161182 6417 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:13Z\\\",\\\"message\\\":\\\") from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 00:07:13.967026 6812 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:07:13.967140 6812 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.967170 6812 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.968094 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:07:13.968187 6812 factory.go:656] Stopping watch factory\\\\nI0127 00:07:13.968216 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:07:13.976666 6812 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 00:07:13.976712 6812 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 00:07:13.976810 6812 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 00:07:13.976855 6812 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:07:13.976959 6812 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.283931 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.305968 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.326039 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.338849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.338909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.338932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.338961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.338986 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.347205 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.363879 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.380694 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.394644 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.414225 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.428756 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.441136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.441178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.441192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.441211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.441224 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.444068 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.460165 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.481826 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.502902 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.544883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.545001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.545023 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.545233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.545253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.647993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.648051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.648068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.648090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.648107 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.666946 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/3.log" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.671680 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:07:15 crc kubenswrapper[4786]: E0127 00:07:15.671925 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.694557 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.712400 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.723554 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.735139 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.750220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.750276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.750294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.750320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.750343 4786 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.755456 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.772490 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.800151 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:13Z\\\",\\\"message\\\":\\\") from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 00:07:13.967026 6812 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:07:13.967140 6812 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.967170 6812 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.968094 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:07:13.968187 6812 factory.go:656] Stopping watch factory\\\\nI0127 00:07:13.968216 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:07:13.976666 6812 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 00:07:13.976712 6812 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 00:07:13.976810 6812 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:07:13.976855 6812 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:07:13.976959 6812 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.813646 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.831492 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.845610 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.852762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.852804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.852816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.852829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.852842 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.860496 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.876294 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.890166 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.905511 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.923271 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.944552 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.956905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.957087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.957119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.957211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.957240 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:15Z","lastTransitionTime":"2026-01-27T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.958347 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:15 crc kubenswrapper[4786]: I0127 00:07:15.972024 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.060929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.061000 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.061017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.061042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.061060 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.144626 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:13:57.491800246 +0000 UTC Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.164443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.164496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.164513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.164537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.164556 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.267667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.267729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.267746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.267772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.267795 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.371002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.371064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.371081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.371110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.371128 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.473895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.473958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.473979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.474005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.474023 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.577288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.577369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.577391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.577421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.577444 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.680938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.680987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.681003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.681025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.681042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.785258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.785332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.785343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.785368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.785381 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.888878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.888959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.888983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.889015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.889036 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.991148 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.991213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.991237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.991270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:16 crc kubenswrapper[4786]: I0127 00:07:16.991300 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:16Z","lastTransitionTime":"2026-01-27T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.094085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.094141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.094157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.094179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.094198 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.145395 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:15:36.874316168 +0000 UTC Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.146771 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.146812 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.146921 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:17 crc kubenswrapper[4786]: E0127 00:07:17.146999 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.147027 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:17 crc kubenswrapper[4786]: E0127 00:07:17.147142 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:17 crc kubenswrapper[4786]: E0127 00:07:17.147288 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:17 crc kubenswrapper[4786]: E0127 00:07:17.147490 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.197498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.197561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.197622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.197654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.197677 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.300805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.300855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.300872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.300894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.300912 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.404461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.404525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.404545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.404604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.404627 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.506983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.507012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.507022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.507037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.507047 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.609324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.609393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.609411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.609435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.609465 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.711492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.711555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.711597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.711622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.711643 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.815656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.815762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.815789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.815816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.815837 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.918357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.918439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.918463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.918493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:17 crc kubenswrapper[4786]: I0127 00:07:17.918523 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:17Z","lastTransitionTime":"2026-01-27T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.022560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.022644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.022672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.022695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.022712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.125898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.125957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.125974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.125997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.126014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.145756 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:52:54.971635033 +0000 UTC Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.229152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.229220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.229242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.229271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.229296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.332096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.332159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.332181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.332212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.332234 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.435215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.435270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.435287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.435309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.435326 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.538342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.538403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.538420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.538450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.538486 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.641253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.641327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.641352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.641376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.641393 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.744470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.744549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.744610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.744666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.744688 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.848101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.848159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.848179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.848201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.848221 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.951317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.951372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.951389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.951413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:18 crc kubenswrapper[4786]: I0127 00:07:18.951429 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:18Z","lastTransitionTime":"2026-01-27T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.053860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.053928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.053950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.053980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.054006 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.115250 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.115388 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:23.115355382 +0000 UTC m=+148.599042465 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.146556 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:59:04.193976259 +0000 UTC Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.146791 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.146859 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.146931 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.147002 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.147134 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.147195 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.147249 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.147401 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.157657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.157720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.157738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.157767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.157785 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.216369 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.216434 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.216505 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.216544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216627 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216675 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216678 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216710 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216709 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216749 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216782 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216794 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216805 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 00:08:23.216777742 +0000 UTC m=+148.700464815 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216972 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:08:23.216937306 +0000 UTC m=+148.700624379 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.216998 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 00:08:23.216984978 +0000 UTC m=+148.700672051 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 00:07:19 crc kubenswrapper[4786]: E0127 00:07:19.217036 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 00:08:23.217025249 +0000 UTC m=+148.700712322 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.260614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.260674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.260696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.260723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.260747 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.363515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.363556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.363592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.363609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.363620 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.467002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.467061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.467080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.467104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.467121 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.569714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.569782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.569800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.569822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.569841 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.672674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.672784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.672808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.672837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.672857 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.775906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.775969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.775985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.776008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.776027 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.879210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.879271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.879287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.879313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.879330 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.982129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.982173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.982182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.982199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:19 crc kubenswrapper[4786]: I0127 00:07:19.982210 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:19Z","lastTransitionTime":"2026-01-27T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
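Every one of the repeated "Node became not ready" entries carries the same root cause string: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so the kubelet simply re-records the NodeNotReady condition each time its status loop runs (roughly every 100 ms here). A small sketch, meant to be run on the node itself, that lists that directory to confirm whether the network plugin has written its config yet:

// cni_check.go - lists the CNI configuration directory named in the log message above.
package main

import (
    "fmt"
    "os"
)

func main() {
    const dir = "/etc/kubernetes/cni/net.d/" // path taken from the kubelet error above
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Printf("cannot read %s: %v\n", dir, err)
        return
    }
    if len(entries) == 0 {
        fmt.Printf("%s is empty: matches the NetworkPluginNotReady condition\n", dir)
        return
    }
    for _, e := range entries {
        fmt.Println(dir + e.Name()) // a healthy node has at least one *.conf/*.conflist here
    }
}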
Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.085475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.085528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.085537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.085553 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.085563 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.146836 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:57:59.899904225 +0000 UTC Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.162382 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:20 crc kubenswrapper[4786]: E0127 00:07:20.162624 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.188718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.188758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.188774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.188862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.188884 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.291956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.292022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.292044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.292073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.292092 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.396101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.396185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.396209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.396426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.396445 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.499963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.500033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.500051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.500077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.500096 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.603417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.603921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.603942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.604003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.604033 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.707382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.707448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.707468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.707496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.707516 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.810735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.810871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.810896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.810928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.810963 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.914263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.914323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.914344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.914373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:20 crc kubenswrapper[4786]: I0127 00:07:20.914398 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:20Z","lastTransitionTime":"2026-01-27T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.021330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.021399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.021432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.021460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.021478 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.124897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.125024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.125051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.125080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.125101 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.146976 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:21 crc kubenswrapper[4786]: E0127 00:07:21.147205 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.146980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.147294 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:26:20.12935815 +0000 UTC Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.147330 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:21 crc kubenswrapper[4786]: E0127 00:07:21.147425 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:21 crc kubenswrapper[4786]: E0127 00:07:21.147505 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.227763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.227811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.227827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.227850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.227866 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
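The certificate_manager.go entries report a kubelet-serving certificate that expires on 2026-02-24 but rotation deadlines that are already in the past relative to the node clock of 2026-01-27 (2026-01-11, 2025-12-01, 2025-12-10 on successive loops), so the kubelet will attempt rotation as soon as it can reach the API server. A sketch that prints the validity window of the current serving certificate; the PEM path below is the usual kubelet PKI location and is an assumption for this node:

// cert_check.go - prints NotBefore/NotAfter of the kubelet serving certificate; path is assumed.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    const path = "/var/lib/kubelet/pki/kubelet-server-current.pem" // assumed location
    data, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found in " + path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    fmt.Printf("NotBefore: %s\nNotAfter:  %s\nexpired:   %v\n",
        cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
}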
Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.331039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.331093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.331110 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.331137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.331162 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.434614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.434679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.434697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.434723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.434742 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.537157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.537291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.537309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.537333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.537351 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.639796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.639825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.639840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.639855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.639867 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.742720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.742776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.742794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.742816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.742834 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.845294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.845344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.845360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.845383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.845399 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.948154 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.948194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.948204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.948220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:21 crc kubenswrapper[4786]: I0127 00:07:21.948231 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:21Z","lastTransitionTime":"2026-01-27T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.050653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.050719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.050742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.050768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.050789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.147085 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.147256 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.147391 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:24:07.81959789 +0000 UTC Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.153339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.153394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.153411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.153439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.153456 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.255970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.256019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.256037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.256059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.256078 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.358718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.358768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.358786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.358807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.358826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.461771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.461822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.461838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.461860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.461876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.565157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.565227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.565251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.565280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.565300 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.707721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.707790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.707814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.707845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.707872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.811333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.811401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.811420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.811445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.811463 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.854075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.854146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.854169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.854202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.854221 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
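The condition object in each setters.go entry ({"type":"Ready","status":"False",...,"reason":"KubeletNotReady"}) is what the kubelet then tries to patch onto the Node object; reading the node back from the API server shows whether any of these updates actually landed (the patch errors a few entries below show they do not). A sketch using client-go, with the kubeconfig path again as a placeholder assumption:

// node_ready.go - reads the conditions currently stored on the Node "crc" from the API server.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    node, err := cs.CoreV1().Nodes().Get(context.Background(), "crc", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, c := range node.Status.Conditions {
        fmt.Printf("%-15s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
}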
Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.877301 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.882726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.882778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.882796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.882850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.882870 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.902777 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.907908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.907977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.908000 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.908028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.908049 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.932764 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.938615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.938692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.938710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.938734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.938751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.959661 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.964873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.964933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.964954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.964979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.965001 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.987670 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:22 crc kubenswrapper[4786]: E0127 00:07:22.987896 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.990790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.990830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.990847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.990908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:22 crc kubenswrapper[4786]: I0127 00:07:22.990928 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:22Z","lastTransitionTime":"2026-01-27T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.093349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.093395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.093411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.093434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.093451 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.147275 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:23 crc kubenswrapper[4786]: E0127 00:07:23.147503 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.147810 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:04:52.364026445 +0000 UTC Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.147883 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.147947 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:23 crc kubenswrapper[4786]: E0127 00:07:23.148157 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:23 crc kubenswrapper[4786]: E0127 00:07:23.148274 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.196229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.196298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.196321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.196349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.196373 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.299459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.299525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.299543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.299606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.299624 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.403035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.403142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.403170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.403199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.403225 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.506314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.506418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.506436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.506460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.506482 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.609754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.609808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.609828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.609850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.609867 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.713878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.713950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.713976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.714004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.714025 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.819020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.819094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.819122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.819146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.819163 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.922626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.922687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.922708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.922731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:23 crc kubenswrapper[4786]: I0127 00:07:23.922751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:23Z","lastTransitionTime":"2026-01-27T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.026060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.026118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.026135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.026159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.026178 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.128970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.129030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.129046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.129077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.129099 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.146647 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:24 crc kubenswrapper[4786]: E0127 00:07:24.146811 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.148958 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:21:25.429735748 +0000 UTC Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.231975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.232047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.232064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.232089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.232106 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.335466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.335535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.335561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.335655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.335673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.438908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.439001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.439019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.439043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.439061 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.543006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.543108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.543127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.543149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.543172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.646156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.646231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.646249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.646273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.646290 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.750133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.750205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.750228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.750254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.750272 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.853512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.853686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.853713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.853745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.853769 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.957521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.957603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.957617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.957640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:24 crc kubenswrapper[4786]: I0127 00:07:24.957655 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:24Z","lastTransitionTime":"2026-01-27T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.061207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.061278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.061296 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.061322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.061341 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.147093 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.147104 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:25 crc kubenswrapper[4786]: E0127 00:07:25.147351 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.147133 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:25 crc kubenswrapper[4786]: E0127 00:07:25.147467 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:25 crc kubenswrapper[4786]: E0127 00:07:25.147675 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.149740 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:03:25.973679562 +0000 UTC Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.167346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.167404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.167422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.167449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.167468 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.171036 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.173024 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.187303 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 
00:07:25.205219 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.227560 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d
75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.261706 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:13Z\\\",\\\"message\\\":\\\") from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 00:07:13.967026 6812 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:07:13.967140 6812 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.967170 6812 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.968094 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:07:13.968187 6812 factory.go:656] Stopping watch factory\\\\nI0127 00:07:13.968216 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:07:13.976666 6812 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 00:07:13.976712 6812 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 00:07:13.976810 6812 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:07:13.976855 6812 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:07:13.976959 6812 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.269545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.269584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.269592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.269607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.269616 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.279886 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.298387 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.316004 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.332446 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.345621 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.363480 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.373293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.373371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.373407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.373441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.373482 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.380967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.404281 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f
02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.424991 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.441271 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.459151 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.476395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.476430 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.476444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.476464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.476478 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.479306 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.500746 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.579884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.579949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.579970 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.579994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.580011 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.682755 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.682823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.682848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.682905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.682929 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.786269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.786330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.786347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.786370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.786388 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.889647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.889738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.889755 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.889779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.889796 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.992343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.992407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.992420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.992439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:25 crc kubenswrapper[4786]: I0127 00:07:25.992475 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:25Z","lastTransitionTime":"2026-01-27T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.095486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.095547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.095598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.095630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.095652 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.146748 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:26 crc kubenswrapper[4786]: E0127 00:07:26.147069 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.150779 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:36:30.85845861 +0000 UTC Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.198164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.198246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.198278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.198307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.198328 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.301907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.301968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.301986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.302010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.302027 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.405669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.405739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.405764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.405794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.405820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.508889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.508994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.509028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.509061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.509087 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.612521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.612616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.612635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.612660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.612680 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.716263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.716346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.716372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.716398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.716416 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.819735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.819795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.819813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.819837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.819855 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.922124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.922177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.922190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.922206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:26 crc kubenswrapper[4786]: I0127 00:07:26.922220 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:26Z","lastTransitionTime":"2026-01-27T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.024649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.024700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.024712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.024729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.024741 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.128003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.128048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.128062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.128084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.128100 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.146713 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.146778 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.146718 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:27 crc kubenswrapper[4786]: E0127 00:07:27.146944 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:27 crc kubenswrapper[4786]: E0127 00:07:27.147046 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:27 crc kubenswrapper[4786]: E0127 00:07:27.147177 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.150912 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:23:18.2832379 +0000 UTC Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.230282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.230339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.230357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.230380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.230399 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.333742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.333811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.333828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.333852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.333870 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.436763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.436842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.436869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.436900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.436929 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.539757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.539804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.539821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.539843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.539860 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.648281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.648340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.648358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.648383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.648400 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.751357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.751779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.752338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.752387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.752436 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.855470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.855517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.855528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.855547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.855560 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.958372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.958426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.958442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.958465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:27 crc kubenswrapper[4786]: I0127 00:07:27.958483 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:27Z","lastTransitionTime":"2026-01-27T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.061914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.061953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.061963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.061977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.061986 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.146905 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:28 crc kubenswrapper[4786]: E0127 00:07:28.147024 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.152078 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:16:43.825822408 +0000 UTC Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.164842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.164893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.164914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.164940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.164957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.267588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.267617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.267625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.267637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.267647 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.370237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.370290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.370307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.370330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.370346 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.473619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.473727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.473749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.473808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.473826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.576962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.577051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.577074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.577104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.577126 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.680115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.680178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.680197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.680225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.680242 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.783768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.783832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.783849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.783885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.783905 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.886545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.886639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.886657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.886681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.886698 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.989013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.989073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.989093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.989117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:28 crc kubenswrapper[4786]: I0127 00:07:28.989136 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:28Z","lastTransitionTime":"2026-01-27T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.092417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.092487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.092504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.092528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.092545 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.147517 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.147680 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.147538 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:29 crc kubenswrapper[4786]: E0127 00:07:29.147802 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:29 crc kubenswrapper[4786]: E0127 00:07:29.148174 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:29 crc kubenswrapper[4786]: E0127 00:07:29.148034 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.152209 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:23:22.309312314 +0000 UTC Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.195356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.195414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.195427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.195447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.195464 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.298726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.298824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.298860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.298893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.298915 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.401812 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.401885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.401909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.401941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.401964 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.504650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.504726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.504750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.504778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.504800 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.607792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.607854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.607873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.607898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.607919 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.710456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.710525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.710546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.710598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.710618 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.813860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.813897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.813909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.813928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.813942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.916508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.916558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.916612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.916637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:29 crc kubenswrapper[4786]: I0127 00:07:29.916658 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:29Z","lastTransitionTime":"2026-01-27T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.019871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.019904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.019915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.019931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.019941 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.122274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.122338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.122357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.122382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.122401 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.147127 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:30 crc kubenswrapper[4786]: E0127 00:07:30.147288 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.148034 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:07:30 crc kubenswrapper[4786]: E0127 00:07:30.148230 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.153311 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:44:26.296503614 +0000 UTC Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.225342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.225404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.225424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.225449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.225466 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.328324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.328419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.328436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.328461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.328478 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.431861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.431932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.431949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.431974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.431993 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.534311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.534375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.534392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.534414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.534431 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.636735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.636811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.636906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.636940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.636962 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.739486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.739526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.739540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.739558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.739576 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.843473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.843543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.843564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.843598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.843653 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.946216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.946271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.946291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.946318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:30 crc kubenswrapper[4786]: I0127 00:07:30.946340 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:30Z","lastTransitionTime":"2026-01-27T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.049970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.050393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.050411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.050447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.050483 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.147102 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.147103 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:31 crc kubenswrapper[4786]: E0127 00:07:31.147336 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.147150 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:31 crc kubenswrapper[4786]: E0127 00:07:31.147414 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:31 crc kubenswrapper[4786]: E0127 00:07:31.147612 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.153462 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:35:05.591734833 +0000 UTC Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.256332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.256406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.256420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.256439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.256450 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.360139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.360223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.360251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.360280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.360302 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.463558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.463633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.463647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.463668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.463683 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.566018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.566061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.566073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.566091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.566102 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.668760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.668805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.668814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.668827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.668837 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.770643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.770713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.770734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.770762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.770786 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.873643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.873686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.873698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.873714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.873725 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.976835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.976883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.976896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.976915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:31 crc kubenswrapper[4786]: I0127 00:07:31.976929 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:31Z","lastTransitionTime":"2026-01-27T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.079573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.079698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.079718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.079743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.079767 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.146943 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:32 crc kubenswrapper[4786]: E0127 00:07:32.147134 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.154086 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:04:26.414029099 +0000 UTC Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.182383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.182460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.182484 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.182511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.182534 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.286306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.286359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.286372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.286390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.286402 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.388922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.388973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.388987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.389003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.389014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.491133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.491194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.491211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.491237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.491257 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.594231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.594330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.594357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.594398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.594425 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.697071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.697125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.697142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.697165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.697183 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.799559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.799970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.799997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.800021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.800037 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.903501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.903616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.903637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.903660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:32 crc kubenswrapper[4786]: I0127 00:07:32.903677 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:32Z","lastTransitionTime":"2026-01-27T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.006795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.006843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.006859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.006883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.006901 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.077741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.077799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.077816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.077838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.077855 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.097896 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.103329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.103391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.103410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.103435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.103453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.123637 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.128760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.128822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.128842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.128866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.128928 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.146929 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.147101 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.146937 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.147362 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.148070 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.148446 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.153736 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.154500 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:18:31.938724533 +0000 UTC Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.160615 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.160740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.160760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.160783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.160803 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.177892 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.183052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.183115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.183137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.183160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.183181 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.203846 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c3970d11-79bc-4b17-85f1-58a9025c1bc5\\\",\\\"systemUUID\\\":\\\"0f9e3400-2828-40d4-9904-504379bf40a5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:33 crc kubenswrapper[4786]: E0127 00:07:33.204077 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.206589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.206694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.206719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.206745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.206763 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.309734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.309795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.309815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.309842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.309864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.412672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.412739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.412762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.412790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.412812 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.516438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.516634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.516655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.516722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.516744 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.620098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.620178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.620203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.620231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.620250 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.723207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.723280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.723317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.723351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.723374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.826102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.826173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.826192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.826219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.826391 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.929071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.929131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.929167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.929195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:33 crc kubenswrapper[4786]: I0127 00:07:33.929218 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:33Z","lastTransitionTime":"2026-01-27T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.032435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.032502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.032523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.032551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.032606 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.135775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.135856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.135879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.135907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.135930 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.147598 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:34 crc kubenswrapper[4786]: E0127 00:07:34.147801 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.155976 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:10:24.284589976 +0000 UTC Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.239445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.239504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.239523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.239547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.239596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.342369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.342414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.342431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.342452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.342468 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.445441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.445506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.445528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.445553 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.445603 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.550725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.550801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.550829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.550864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.550890 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.654609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.654687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.654705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.654731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.654750 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.758770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.758818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.758833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.758859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.758872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.862371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.862432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.862450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.862474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.862491 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.965471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.965523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.965537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.965559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:34 crc kubenswrapper[4786]: I0127 00:07:34.965595 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:34Z","lastTransitionTime":"2026-01-27T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.068502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.068556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.068648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.068682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.068707 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.099742 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:35 crc kubenswrapper[4786]: E0127 00:07:35.099924 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:07:35 crc kubenswrapper[4786]: E0127 00:07:35.099989 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs podName:be80aa92-329a-4f72-9dbb-b717f533fffb nodeName:}" failed. No retries permitted until 2026-01-27 00:08:39.099972915 +0000 UTC m=+164.583659968 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs") pod "network-metrics-daemon-9czjg" (UID: "be80aa92-329a-4f72-9dbb-b717f533fffb") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.146943 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.147036 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:35 crc kubenswrapper[4786]: E0127 00:07:35.147146 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.147186 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:35 crc kubenswrapper[4786]: E0127 00:07:35.147349 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:35 crc kubenswrapper[4786]: E0127 00:07:35.147511 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.156442 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:20:58.434822332 +0000 UTC Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.171755 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.171804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.171815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.171835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.171848 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.172757 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bf39ee3-976b-4ecb-b200-1d4a790b67ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aea81cc38a7a67db407051a09afb65183ec5acef8a386b73f4e4129797f805f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d19319e4e494c96ef595de30b1cc0fde1e905b7a55d2531a31065f190fe73c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c3118034f71ca0398ed5fc68620d75645c210f7b92f099280bbce4d9b00fce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e5e22b3c4f1289f4e15f1bf515247c52cf7684e9ccf18b871282c5ea60590f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-01-27T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://890d33749e8d2c80dcfaffdb07580bf84210fea99361505762bb38dfcc33f0a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3f7c8e15c0e2a57818f4613a49e7466ed4ce18d3cb5e8a1182b3c0ccca6483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713
d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de3ac6594156a066a46b8ecbcea50642c5e4deb691016552d29fe0d16c92b3e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ntg2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.203755 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"629f8cf2-3b6f-404b-814f-1e613f80e63e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966
ba9bf1e22e8ef6c2dbbe8185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:13Z\\\",\\\"message\\\":\\\") from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 00:07:13.967026 6812 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 00:07:13.967140 6812 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.967170 6812 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 00:07:13.968094 6812 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 00:07:13.968187 6812 factory.go:656] Stopping watch factory\\\\nI0127 00:07:13.968216 6812 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 00:07:13.976666 6812 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 00:07:13.976712 6812 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 00:07:13.976810 6812 ovnkube.go:599] Stopped ovnkube\\\\nI0127 00:07:13.976855 6812 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 00:07:13.976959 6812 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fqh9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.223363 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.238950 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xs757" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34352a54-9248-486d-9020-721918f77f3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff696afd0a8a005845a8d743faa024afc63836ec3d4b5b0dafb58421851a4121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwg8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xs757\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.256123 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpkpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cfc6ef0-977d-4767-bb4b-ba841a34acb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67d70cfe10dfed3174106d771684f89de770c3072d10372483d065336c3c734f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j288p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpkpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.271379 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba89bd8e9fc7e44e41adf96e797f6ebc34379e26216c0647aa027f3ac97856e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mjgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-87nzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.274420 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.274489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.274507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.274533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.274549 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.285409 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-phvd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d790bab-fb2b-4745-a195-65359a962f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T00:07:04Z\\\",\\\"message\\\":\\\"2026-01-27T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54\\\\n2026-01-27T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d90f311a-cbab-4680-b094-55bf7993bf54 to /host/opt/cni/bin/\\\\n2026-01-27T00:06:19Z [verbose] multus-daemon started\\\\n2026-01-27T00:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-27T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-phvd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.300728 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9czjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be80aa92-329a-4f72-9dbb-b717f533fffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xlfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9czjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.311553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4814b87f-28ae-4b5d-99c9-9abc9bb08d12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b57a7a79515a445dac3135d999e35cb9189ae8220163c2675776c7c3fca29e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d32d3420ca085c92137edd91b11f766d07c7250a84c0bfb56cb23c3fbf8a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a15646ce2dfb4521aec7755246c4db758672eab42c4189c8e36279d5f8e5a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdc6ce3b8c6931735670b95f0e2fdb46d0ab65ea7f72485e9ab7aeae6b64bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.336897 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705c9488-7d54-47ae-a67f-a9e7ad0c5186\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1b799985cf2ce342fb6963e06c9223736c131379415b2082bb0e8e86d00a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe2cc999d105b76aa0cbfcbfa461d3436afc957b3a5c7f154932428ce0404d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afa36654001a9c461b8ac98fce3ad6c3eb16c6bf3f8154c8bedfb0702f6646d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5afa0fb1d71f298389fc160f89f4c8b92090a0fd54f994513884a04c5e09f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f96cb72b6836a1238b6e49b71238662af99342805c43050a01a342abb6ab25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c3c0b1e213899a73a2a030eaeb4eed5065cb76d5cd5e191a48c857f447c60f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3c0b1e213899a73a2a030eaeb4eed5065cb76d5cd5e191a48c857f447c60f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43a919593dac7a793cf7aabf9c6586842d9621129b6a17b8a98a56e61b72f48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43a919593dac7a793cf7aabf9c6586842d9621129b6a17b8a98a56e61b72f48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1740c0b205f591af52b4983e5902a52e3b9ae62878c2f198e8a4190e96347e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1740c0b205f591af52b4983e5902a52e3b9ae62878c2f198e8a4190e96347e5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.349156 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.367515 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.376981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.377032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.377053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.377076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.377093 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.382713 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60c33092932e40c94150a785c6545d774e8052b5c531548cbb0e283c7702101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.396037 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617f5804-7f6b-44cd-9f6b-2cffbd175ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5a62dcd1241c205ba2e954925745c7e8b656bba06711d9c75f7b21b5cca3938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2df5fa9ca7fe72c057ae5a7fce0b9734e54703bac95417f11d3fbbf296adf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:06:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv5fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 
00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.414057 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb223fe-4f5f-42b9-8130-f7c4f0ce27b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfb57de1e5bf761a9733d1a8f67d6151954a525111f6a3a7cbf59b615398a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b617fb009cea9e82dfc7d17dd1fc3a82c37bc747340e0c4be2b79aca50d513a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe88ba155393c257580e23a64910ff20ee2c7cb1fd2e1d9855d7b8126eb4074\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.429185 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3644e20d-50a7-434b-87f8-01c86c47af1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ec89ee8c69c65f68be99e976c6c84c2c8b775a85316cef276cd331b1f209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59312fdb484503f7da425e7f6aa2c09689ae291111863445e52a1550cd64d77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.444005 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c04e7fb1-392d-4713-82ae-b82f94f1fc50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T00:06:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 00:06:08.828487 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 00:06:08.830749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2209123273/tls.crt::/tmp/serving-cert-2209123273/tls.key\\\\\\\"\\\\nI0127 00:06:14.885892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 00:06:14.896619 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 00:06:14.896642 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 00:06:14.896695 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 00:06:14.896700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 00:06:14.901908 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 00:06:14.902031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 00:06:14.902048 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 00:06:14.902060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 00:06:14.902065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 00:06:14.902069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 00:06:14.901964 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 00:06:14.904564 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T00:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T00:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T00:05:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.460039 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fad54401c6254e30ba8df8fb9bd2c286e98e272b239775aaf67975faefb0bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.476127 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T00:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdc00173b39cac3b7d2c7edb265d2167fd760210e7fda0c4fc7c5223913b661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b564457d9941a9de03dd9af15fc001698ccbe664a919bf99865a497e444bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T00:07:35Z is after 2025-08-24T17:21:41Z" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.479933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.480007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.480030 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.480064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.480089 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.583462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.583554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.583618 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.583650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.583673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.686612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.686668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.686689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.686715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.686734 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.789152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.789191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.789200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.789213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.789224 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.892082 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.892125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.892139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.892159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.892175 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.995509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.995613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.995632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.995656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:35 crc kubenswrapper[4786]: I0127 00:07:35.995675 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:35Z","lastTransitionTime":"2026-01-27T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.098533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.098589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.098601 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.098617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.098628 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.147141 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:36 crc kubenswrapper[4786]: E0127 00:07:36.147291 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.157662 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:36:50.868630343 +0000 UTC Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.201353 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.201393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.201404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.201419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.201429 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.304502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.304558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.304618 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.304642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.304660 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.407465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.407531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.407552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.407616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.407640 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.511415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.511483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.511510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.511543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.511564 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.615151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.615250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.615306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.615332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.615350 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.717800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.717872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.717897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.717927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.717949 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.821511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.821615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.821643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.821671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.821730 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.924365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.924425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.924442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.924467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:36 crc kubenswrapper[4786]: I0127 00:07:36.924485 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:36Z","lastTransitionTime":"2026-01-27T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.027601 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.027666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.027690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.027717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.027738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.131683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.131772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.131792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.131845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.131862 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.146911 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.147013 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.147052 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:37 crc kubenswrapper[4786]: E0127 00:07:37.147223 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:37 crc kubenswrapper[4786]: E0127 00:07:37.147380 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:37 crc kubenswrapper[4786]: E0127 00:07:37.147623 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.158700 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:16:26.475595794 +0000 UTC Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.234286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.234365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.234392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.234427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.234449 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.336937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.337033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.337052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.337077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.337095 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.440393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.440465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.440491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.440520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.440543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.543860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.543916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.543933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.543958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.543991 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.647318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.647374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.647390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.647416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.647433 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.750281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.750333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.750346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.750363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.750375 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.852822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.852945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.853016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.853047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.853073 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.956332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.956374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.956386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.956425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:37 crc kubenswrapper[4786]: I0127 00:07:37.956439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:37Z","lastTransitionTime":"2026-01-27T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.059380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.059458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.059474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.059492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.059505 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.147039 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:38 crc kubenswrapper[4786]: E0127 00:07:38.147248 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.159437 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:37:03.834765394 +0000 UTC Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.162823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.162865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.162881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.162903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.162921 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.270129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.270191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.270208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.270232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.270249 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.372855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.372923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.372940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.372964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.372981 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.476124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.476196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.476218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.476248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.476267 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.579220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.579289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.579313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.579344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.579371 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.682495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.682670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.682700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.682723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.682739 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.785231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.785351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.785374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.785410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.785437 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.889263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.889338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.889358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.889382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.889402 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.992907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.993010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.993035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.993070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:38 crc kubenswrapper[4786]: I0127 00:07:38.993094 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:38Z","lastTransitionTime":"2026-01-27T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.096751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.096833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.096852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.096885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.096903 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.146808 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.146888 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.146832 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:39 crc kubenswrapper[4786]: E0127 00:07:39.146989 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:39 crc kubenswrapper[4786]: E0127 00:07:39.147107 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:39 crc kubenswrapper[4786]: E0127 00:07:39.147229 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.160281 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:35:42.503089794 +0000 UTC Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.200481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.200554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.200612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.200645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.200669 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.303556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.303660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.303685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.303714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.303737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.406701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.406771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.406796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.406824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.406887 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.510410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.510491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.510509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.510537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.510558 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.614605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.614689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.614711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.614746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.614772 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.718914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.718976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.718990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.719016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.719035 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.822290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.822347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.822362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.822383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.822398 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.924336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.924425 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.924452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.924484 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:39 crc kubenswrapper[4786]: I0127 00:07:39.924509 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:39Z","lastTransitionTime":"2026-01-27T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.028061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.028203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.028224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.028248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.028266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.131136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.131172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.131182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.131197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.131209 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.147246 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:40 crc kubenswrapper[4786]: E0127 00:07:40.147381 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.161255 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:16:23.525791452 +0000 UTC Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.234904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.235068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.235105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.235134 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.235157 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.338560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.338658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.338692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.338765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.338791 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
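[editor's note, illustration only] The certificate_manager.go:356 lines in this stretch of the log report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass (anywhere from 2025-11-09 to 2025-12-25 in this excerpt). That pattern is what a jittered deadline looks like: it is recomputed each time it is evaluated and lands at a random point late in the certificate's lifetime. The sketch below only illustrates the idea; the 70%-90% window and the one-year validity are assumptions modelled on client-go's certificate manager, not values taken from this log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point between 70% and 90% of the
// certificate lifetime, which is why a logged deadline can differ on each
// evaluation while the expiration stays fixed. The 70%-90% window is an
// assumption (modelled on client-go's jitter), not a value from this log.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration reported in the log
	notBefore := notAfter.AddDate(-1, 0, 0)                    // assumed one-year validity
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}

Because every evaluation re-rolls the jitter, the printed deadlines differ within a run and between runs, just as the logged ones do, while the expiration never moves.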
Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.442291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.442431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.442518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.442630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.442659 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.546612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.546679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.546700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.546726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.546744 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.650770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.650838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.650849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.650871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.650895 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.754227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.754289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.754307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.754333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.754350 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.863040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.863116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.863157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.863190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.863216 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.965957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.965989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.965997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.966009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:40 crc kubenswrapper[4786]: I0127 00:07:40.966017 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:40Z","lastTransitionTime":"2026-01-27T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.068472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.068522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.068534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.068550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.068562 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.146928 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.147016 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:41 crc kubenswrapper[4786]: E0127 00:07:41.147550 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.147155 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:41 crc kubenswrapper[4786]: E0127 00:07:41.147708 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:41 crc kubenswrapper[4786]: E0127 00:07:41.148031 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.161736 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:03:18.610231247 +0000 UTC Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.171336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.171417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.171444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.171493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.171524 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.274622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.274838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.274887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.274910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.274927 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.377428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.377470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.377480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.377496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.377507 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.480207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.480258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.480270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.480291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.480305 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.583852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.583911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.583924 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.583951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.583979 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.687359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.687403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.687414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.687435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.687447 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.790433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.790501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.790522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.790545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.790559 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.893445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.893501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.893523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.893552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.893608 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.996168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.996217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.996231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.996248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:41 crc kubenswrapper[4786]: I0127 00:07:41.996261 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:41Z","lastTransitionTime":"2026-01-27T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.098962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.099001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.099013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.099030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.099043 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.146802 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:42 crc kubenswrapper[4786]: E0127 00:07:42.146980 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.162719 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:14:10.220158359 +0000 UTC Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.200980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.201023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.201039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.201061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.201074 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.304197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.304253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.304273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.304301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.304321 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.407377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.407424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.407438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.407456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.407468 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.510206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.510297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.510317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.510342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.510359 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.612440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.612480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.612491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.612507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.612519 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.715716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.715777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.715792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.715808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.715819 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.818784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.818858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.818884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.818914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.818938 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.921399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.921451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.921469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.921493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:42 crc kubenswrapper[4786]: I0127 00:07:42.921511 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:42Z","lastTransitionTime":"2026-01-27T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.024662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.024747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.024774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.024805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.024831 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:43Z","lastTransitionTime":"2026-01-27T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.127187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.127240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.127252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.127272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.127285 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:43Z","lastTransitionTime":"2026-01-27T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.146550 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.146637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:43 crc kubenswrapper[4786]: E0127 00:07:43.146741 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.146855 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:43 crc kubenswrapper[4786]: E0127 00:07:43.147035 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:43 crc kubenswrapper[4786]: E0127 00:07:43.147159 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.162927 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:22:55.667872056 +0000 UTC Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.229916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.229956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.229969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.229987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.229998 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:43Z","lastTransitionTime":"2026-01-27T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.302932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.302971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.302983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.303010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.303022 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T00:07:43Z","lastTransitionTime":"2026-01-27T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.367456 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm"] Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.367989 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.370996 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.371011 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.371063 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.371241 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.391316 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46277e18-9014-460b-93d0-ed019e5ddd2d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.391462 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.391517 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/46277e18-9014-460b-93d0-ed019e5ddd2d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.391544 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.391620 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46277e18-9014-460b-93d0-ed019e5ddd2d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.392281 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.392267879 podStartE2EDuration="1m1.392267879s" podCreationTimestamp="2026-01-27 00:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.392147556 +0000 UTC m=+108.875834609" watchObservedRunningTime="2026-01-27 00:07:43.392267879 +0000 UTC m=+108.875954932" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.419870 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.419849368 podStartE2EDuration="18.419849368s" podCreationTimestamp="2026-01-27 00:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.419657822 +0000 UTC m=+108.903344875" watchObservedRunningTime="2026-01-27 00:07:43.419849368 +0000 UTC m=+108.903536421" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.463981 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-phvd5" podStartSLOduration=87.463960059 podStartE2EDuration="1m27.463960059s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.462752294 +0000 UTC m=+108.946439367" watchObservedRunningTime="2026-01-27 00:07:43.463960059 +0000 UTC m=+108.947647112" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492452 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/46277e18-9014-460b-93d0-ed019e5ddd2d-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492478 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492523 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46277e18-9014-460b-93d0-ed019e5ddd2d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492547 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46277e18-9014-460b-93d0-ed019e5ddd2d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492808 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.492820 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/46277e18-9014-460b-93d0-ed019e5ddd2d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.493534 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/46277e18-9014-460b-93d0-ed019e5ddd2d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.497465 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.497440876 podStartE2EDuration="1m24.497440876s" podCreationTimestamp="2026-01-27 00:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.487999746 +0000 UTC m=+108.971686789" watchObservedRunningTime="2026-01-27 00:07:43.497440876 +0000 UTC m=+108.981127939" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.497892 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=32.497883748 podStartE2EDuration="32.497883748s" podCreationTimestamp="2026-01-27 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.497834277 +0000 UTC m=+108.981521320" watchObservedRunningTime="2026-01-27 00:07:43.497883748 +0000 UTC m=+108.981570811" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.501644 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46277e18-9014-460b-93d0-ed019e5ddd2d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.511121 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46277e18-9014-460b-93d0-ed019e5ddd2d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xmvdm\" (UID: \"46277e18-9014-460b-93d0-ed019e5ddd2d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.512708 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.512691902 podStartE2EDuration="1m28.512691902s" podCreationTimestamp="2026-01-27 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.511429316 +0000 UTC m=+108.995116349" watchObservedRunningTime="2026-01-27 00:07:43.512691902 +0000 UTC m=+108.996378945" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.554211 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv5fw" podStartSLOduration=87.554188018 podStartE2EDuration="1m27.554188018s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.55355594 +0000 UTC m=+109.037242993" watchObservedRunningTime="2026-01-27 00:07:43.554188018 +0000 UTC m=+109.037875081" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.640959 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jpkpx" podStartSLOduration=87.640945708 podStartE2EDuration="1m27.640945708s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.640909307 +0000 UTC m=+109.124596350" watchObservedRunningTime="2026-01-27 00:07:43.640945708 +0000 UTC m=+109.124632751" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.641492 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xs757" podStartSLOduration=87.641489384 podStartE2EDuration="1m27.641489384s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.630313084 +0000 UTC m=+109.114000137" watchObservedRunningTime="2026-01-27 00:07:43.641489384 +0000 UTC m=+109.125176417" Jan 27 
00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.652864 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podStartSLOduration=87.652848089 podStartE2EDuration="1m27.652848089s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.65220605 +0000 UTC m=+109.135893083" watchObservedRunningTime="2026-01-27 00:07:43.652848089 +0000 UTC m=+109.136535132" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.669673 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ntg2b" podStartSLOduration=87.669650499 podStartE2EDuration="1m27.669650499s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.666462578 +0000 UTC m=+109.150149631" watchObservedRunningTime="2026-01-27 00:07:43.669650499 +0000 UTC m=+109.153337542" Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.685826 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" Jan 27 00:07:43 crc kubenswrapper[4786]: W0127 00:07:43.704630 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46277e18_9014_460b_93d0_ed019e5ddd2d.slice/crio-b4152d60c9c94a7d8f7c9a567aa140ce085dc2cd4fbedd0933901bb3324111ed WatchSource:0}: Error finding container b4152d60c9c94a7d8f7c9a567aa140ce085dc2cd4fbedd0933901bb3324111ed: Status 404 returned error can't find the container with id b4152d60c9c94a7d8f7c9a567aa140ce085dc2cd4fbedd0933901bb3324111ed Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.797123 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" event={"ID":"46277e18-9014-460b-93d0-ed019e5ddd2d","Type":"ContainerStarted","Data":"ae4a7161820b9d4f593905d5114f26ddcb96fd9be1b0b354e7a7c4b362951403"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.797181 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" event={"ID":"46277e18-9014-460b-93d0-ed019e5ddd2d","Type":"ContainerStarted","Data":"b4152d60c9c94a7d8f7c9a567aa140ce085dc2cd4fbedd0933901bb3324111ed"} Jan 27 00:07:43 crc kubenswrapper[4786]: I0127 00:07:43.817125 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xmvdm" podStartSLOduration=87.817109245 podStartE2EDuration="1m27.817109245s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:43.816559079 +0000 UTC m=+109.300246162" watchObservedRunningTime="2026-01-27 00:07:43.817109245 +0000 UTC m=+109.300796278" Jan 27 00:07:44 crc kubenswrapper[4786]: I0127 00:07:44.146659 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:44 crc kubenswrapper[4786]: E0127 00:07:44.146769 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:44 crc kubenswrapper[4786]: I0127 00:07:44.163113 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:16:55.562881251 +0000 UTC Jan 27 00:07:44 crc kubenswrapper[4786]: I0127 00:07:44.163750 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 00:07:44 crc kubenswrapper[4786]: I0127 00:07:44.175947 4786 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 00:07:45 crc kubenswrapper[4786]: I0127 00:07:45.147280 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:45 crc kubenswrapper[4786]: I0127 00:07:45.147357 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:45 crc kubenswrapper[4786]: I0127 00:07:45.147314 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:45 crc kubenswrapper[4786]: E0127 00:07:45.148450 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:45 crc kubenswrapper[4786]: E0127 00:07:45.149163 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:45 crc kubenswrapper[4786]: E0127 00:07:45.149789 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:45 crc kubenswrapper[4786]: I0127 00:07:45.150962 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:07:45 crc kubenswrapper[4786]: E0127 00:07:45.151202 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fqh9p_openshift-ovn-kubernetes(629f8cf2-3b6f-404b-814f-1e613f80e63e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" Jan 27 00:07:46 crc kubenswrapper[4786]: I0127 00:07:46.147340 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:46 crc kubenswrapper[4786]: E0127 00:07:46.147768 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:47 crc kubenswrapper[4786]: I0127 00:07:47.147293 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:47 crc kubenswrapper[4786]: I0127 00:07:47.147365 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:47 crc kubenswrapper[4786]: E0127 00:07:47.147457 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:47 crc kubenswrapper[4786]: I0127 00:07:47.147311 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:47 crc kubenswrapper[4786]: E0127 00:07:47.147619 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:47 crc kubenswrapper[4786]: E0127 00:07:47.147750 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:48 crc kubenswrapper[4786]: I0127 00:07:48.146464 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:48 crc kubenswrapper[4786]: E0127 00:07:48.147052 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:49 crc kubenswrapper[4786]: I0127 00:07:49.147399 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:49 crc kubenswrapper[4786]: I0127 00:07:49.147472 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:49 crc kubenswrapper[4786]: E0127 00:07:49.147673 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:49 crc kubenswrapper[4786]: I0127 00:07:49.147729 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:49 crc kubenswrapper[4786]: E0127 00:07:49.147929 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:49 crc kubenswrapper[4786]: E0127 00:07:49.148097 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.146611 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:50 crc kubenswrapper[4786]: E0127 00:07:50.146812 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.824495 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/1.log" Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.825731 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/0.log" Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.825812 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d790bab-fb2b-4745-a195-65359a962f52" containerID="a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a" exitCode=1 Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.825859 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerDied","Data":"a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a"} Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.825909 4786 scope.go:117] "RemoveContainer" containerID="1404e1cc26e3a498da6d682f7e6c3d344a9b1b9990097535a7634ca72dc27e81" Jan 27 00:07:50 crc kubenswrapper[4786]: I0127 00:07:50.826841 4786 scope.go:117] "RemoveContainer" containerID="a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a" Jan 27 00:07:50 crc kubenswrapper[4786]: E0127 00:07:50.827141 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-phvd5_openshift-multus(8d790bab-fb2b-4745-a195-65359a962f52)\"" pod="openshift-multus/multus-phvd5" podUID="8d790bab-fb2b-4745-a195-65359a962f52" Jan 27 00:07:51 crc kubenswrapper[4786]: I0127 00:07:51.146988 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:51 crc kubenswrapper[4786]: I0127 00:07:51.147002 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:51 crc kubenswrapper[4786]: I0127 00:07:51.147324 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:51 crc kubenswrapper[4786]: E0127 00:07:51.147387 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:51 crc kubenswrapper[4786]: E0127 00:07:51.147211 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:51 crc kubenswrapper[4786]: E0127 00:07:51.147676 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:51 crc kubenswrapper[4786]: I0127 00:07:51.832015 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/1.log" Jan 27 00:07:52 crc kubenswrapper[4786]: I0127 00:07:52.147408 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:52 crc kubenswrapper[4786]: E0127 00:07:52.147912 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:53 crc kubenswrapper[4786]: I0127 00:07:53.146717 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:53 crc kubenswrapper[4786]: E0127 00:07:53.147416 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:53 crc kubenswrapper[4786]: I0127 00:07:53.146949 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:53 crc kubenswrapper[4786]: E0127 00:07:53.147692 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:53 crc kubenswrapper[4786]: I0127 00:07:53.146775 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:53 crc kubenswrapper[4786]: E0127 00:07:53.147950 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:54 crc kubenswrapper[4786]: I0127 00:07:54.146444 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:54 crc kubenswrapper[4786]: E0127 00:07:54.146710 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:55 crc kubenswrapper[4786]: E0127 00:07:55.110806 4786 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 00:07:55 crc kubenswrapper[4786]: I0127 00:07:55.146675 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:55 crc kubenswrapper[4786]: I0127 00:07:55.146762 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:55 crc kubenswrapper[4786]: I0127 00:07:55.148871 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:55 crc kubenswrapper[4786]: E0127 00:07:55.148878 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:55 crc kubenswrapper[4786]: E0127 00:07:55.149008 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:55 crc kubenswrapper[4786]: E0127 00:07:55.149249 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:55 crc kubenswrapper[4786]: E0127 00:07:55.248027 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 00:07:56 crc kubenswrapper[4786]: I0127 00:07:56.147196 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:56 crc kubenswrapper[4786]: E0127 00:07:56.147402 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.147489 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.147538 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:57 crc kubenswrapper[4786]: E0127 00:07:57.147850 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.147885 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:57 crc kubenswrapper[4786]: E0127 00:07:57.148462 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:57 crc kubenswrapper[4786]: E0127 00:07:57.148632 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.149076 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.858266 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/3.log" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.862014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerStarted","Data":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.862765 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:07:57 crc kubenswrapper[4786]: I0127 00:07:57.891453 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podStartSLOduration=101.891429955 podStartE2EDuration="1m41.891429955s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:07:57.891015423 +0000 UTC m=+123.374702506" watchObservedRunningTime="2026-01-27 00:07:57.891429955 +0000 UTC m=+123.375116998" Jan 27 00:07:58 crc kubenswrapper[4786]: I0127 00:07:58.147180 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:07:58 crc kubenswrapper[4786]: E0127 00:07:58.147365 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:07:58 crc kubenswrapper[4786]: I0127 00:07:58.291921 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9czjg"] Jan 27 00:07:58 crc kubenswrapper[4786]: I0127 00:07:58.292068 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:07:58 crc kubenswrapper[4786]: E0127 00:07:58.292181 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:07:59 crc kubenswrapper[4786]: I0127 00:07:59.147357 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:07:59 crc kubenswrapper[4786]: E0127 00:07:59.147618 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:07:59 crc kubenswrapper[4786]: I0127 00:07:59.147941 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:07:59 crc kubenswrapper[4786]: E0127 00:07:59.148298 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:00 crc kubenswrapper[4786]: I0127 00:08:00.147395 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:00 crc kubenswrapper[4786]: I0127 00:08:00.147410 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:00 crc kubenswrapper[4786]: E0127 00:08:00.147631 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:00 crc kubenswrapper[4786]: E0127 00:08:00.147788 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:00 crc kubenswrapper[4786]: E0127 00:08:00.249266 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 00:08:01 crc kubenswrapper[4786]: I0127 00:08:01.146704 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:01 crc kubenswrapper[4786]: I0127 00:08:01.146806 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:01 crc kubenswrapper[4786]: E0127 00:08:01.146886 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:08:01 crc kubenswrapper[4786]: E0127 00:08:01.147018 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:02 crc kubenswrapper[4786]: I0127 00:08:02.147095 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:02 crc kubenswrapper[4786]: E0127 00:08:02.147843 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:02 crc kubenswrapper[4786]: I0127 00:08:02.147082 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:02 crc kubenswrapper[4786]: E0127 00:08:02.147989 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:02 crc kubenswrapper[4786]: I0127 00:08:02.990277 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:08:03 crc kubenswrapper[4786]: I0127 00:08:03.147927 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:03 crc kubenswrapper[4786]: I0127 00:08:03.148073 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:03 crc kubenswrapper[4786]: E0127 00:08:03.148269 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:08:03 crc kubenswrapper[4786]: E0127 00:08:03.148961 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:04 crc kubenswrapper[4786]: I0127 00:08:04.147088 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:04 crc kubenswrapper[4786]: I0127 00:08:04.147166 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:04 crc kubenswrapper[4786]: E0127 00:08:04.147311 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:04 crc kubenswrapper[4786]: E0127 00:08:04.147462 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:05 crc kubenswrapper[4786]: I0127 00:08:05.146875 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:05 crc kubenswrapper[4786]: E0127 00:08:05.148809 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:08:05 crc kubenswrapper[4786]: I0127 00:08:05.148838 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:05 crc kubenswrapper[4786]: E0127 00:08:05.148963 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:05 crc kubenswrapper[4786]: E0127 00:08:05.250262 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 00:08:06 crc kubenswrapper[4786]: I0127 00:08:06.147424 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:06 crc kubenswrapper[4786]: I0127 00:08:06.147657 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:06 crc kubenswrapper[4786]: E0127 00:08:06.148761 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:06 crc kubenswrapper[4786]: I0127 00:08:06.147915 4786 scope.go:117] "RemoveContainer" containerID="a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a" Jan 27 00:08:06 crc kubenswrapper[4786]: E0127 00:08:06.148625 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:06 crc kubenswrapper[4786]: I0127 00:08:06.897825 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/1.log" Jan 27 00:08:06 crc kubenswrapper[4786]: I0127 00:08:06.897869 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerStarted","Data":"dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf"} Jan 27 00:08:07 crc kubenswrapper[4786]: I0127 00:08:07.147312 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:07 crc kubenswrapper[4786]: I0127 00:08:07.147328 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:07 crc kubenswrapper[4786]: E0127 00:08:07.147525 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:07 crc kubenswrapper[4786]: E0127 00:08:07.147710 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:08:08 crc kubenswrapper[4786]: I0127 00:08:08.146819 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:08 crc kubenswrapper[4786]: E0127 00:08:08.147038 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:08 crc kubenswrapper[4786]: I0127 00:08:08.146855 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:08 crc kubenswrapper[4786]: E0127 00:08:08.147362 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:09 crc kubenswrapper[4786]: I0127 00:08:09.147263 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:09 crc kubenswrapper[4786]: I0127 00:08:09.147291 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:09 crc kubenswrapper[4786]: E0127 00:08:09.147657 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 00:08:09 crc kubenswrapper[4786]: E0127 00:08:09.147532 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 00:08:10 crc kubenswrapper[4786]: I0127 00:08:10.147378 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:10 crc kubenswrapper[4786]: E0127 00:08:10.147518 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9czjg" podUID="be80aa92-329a-4f72-9dbb-b717f533fffb" Jan 27 00:08:10 crc kubenswrapper[4786]: I0127 00:08:10.147378 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:10 crc kubenswrapper[4786]: E0127 00:08:10.147798 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.146479 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.146620 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.148871 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.150641 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.150653 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 00:08:11 crc kubenswrapper[4786]: I0127 00:08:11.150711 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 00:08:12 crc kubenswrapper[4786]: I0127 00:08:12.147246 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:12 crc kubenswrapper[4786]: I0127 00:08:12.147291 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:12 crc kubenswrapper[4786]: I0127 00:08:12.151234 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 00:08:12 crc kubenswrapper[4786]: I0127 00:08:12.151355 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.659387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.713197 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29491200-w2pbt"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.714152 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.718879 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.719507 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.720066 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.720644 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.722644 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tsvt8"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.723830 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.725143 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.725711 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.727992 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.728691 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.729665 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.730453 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.732233 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.733013 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.734731 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.735734 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.737115 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s2dp4"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.737674 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.805192 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.812966 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.817073 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.825723 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.825935 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.826470 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.826528 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.826584 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.827624 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.829252 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hvzq4"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.829759 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830091 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830154 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830259 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830326 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830376 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830476 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.830601 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.831088 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.831322 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.833081 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.834937 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5zhf8"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.835462 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.835926 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.836145 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.836303 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.837524 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.838466 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.838394 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.839060 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.839253 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.839736 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.839853 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.840255 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.842129 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.842216 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.842970 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.842989 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843059 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.842981 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843273 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 
00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843372 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843449 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843469 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843498 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843523 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843663 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843726 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843870 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.843976 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.844213 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.845604 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbbdx"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853285 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-config\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853333 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af973e-fec0-41b4-83eb-88d80e67a17b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853438 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-config\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853464 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-client\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853487 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-oauth-config\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853507 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-audit-policies\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853586 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtn47\" (UniqueName: \"kubernetes.io/projected/6e416439-768c-4148-a44a-2c1d155e32fb-kube-api-access-mtn47\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853649 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5l2m\" (UniqueName: \"kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853673 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-machine-approver-tls\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853694 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853712 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-oauth-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853730 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853754 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af973e-fec0-41b4-83eb-88d80e67a17b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853775 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmbz\" (UniqueName: \"kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-service-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853816 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-serving-cert\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853846 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853866 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5qf\" (UniqueName: \"kubernetes.io/projected/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-kube-api-access-hm5qf\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853891 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2x8\" (UniqueName: \"kubernetes.io/projected/a1af973e-fec0-41b4-83eb-88d80e67a17b-kube-api-access-9n2x8\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853914 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r79bs\" (UniqueName: \"kubernetes.io/projected/a68510e4-f5ff-42ec-bb5d-c0758a4de003-kube-api-access-r79bs\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853935 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-encryption-config\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853965 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-service-ca\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.853986 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qkb\" (UniqueName: \"kubernetes.io/projected/dcac63ca-ef6c-4848-8744-e29c2af9e390-kube-api-access-r6qkb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-trusted-ca-bundle\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854030 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e416439-768c-4148-a44a-2c1d155e32fb-audit-dir\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854050 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-auth-proxy-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854081 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcac63ca-ef6c-4848-8744-e29c2af9e390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854101 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854125 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a68510e4-f5ff-42ec-bb5d-c0758a4de003-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854149 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854169 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563a4e6d-5e85-4eb3-8078-69c0484d6b93-serving-cert\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854189 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l57\" (UniqueName: \"kubernetes.io/projected/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-kube-api-access-q4l57\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854208 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854227 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcac63ca-ef6c-4848-8744-e29c2af9e390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.854249 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2zm\" (UniqueName: \"kubernetes.io/projected/563a4e6d-5e85-4eb3-8078-69c0484d6b93-kube-api-access-vn2zm\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.873009 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.873176 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.874197 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.874469 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.875510 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.876560 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.876631 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.876946 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.876993 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.877091 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.877863 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.878116 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.878279 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.878384 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.878485 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.878612 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.880490 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.880931 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.881200 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.881488 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.882715 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.882863 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.882911 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883029 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883527 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883549 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883659 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883684 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883750 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.883849 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.884051 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.884073 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.884370 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.884858 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.884969 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.885074 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.885184 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.885291 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.885613 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.885677 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.886692 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.886694 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.886911 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.887834 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fmqkz"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.888470 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.888838 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7k2vr"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.889533 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.890031 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.890305 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.904243 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906153 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906203 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906359 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906420 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906510 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906623 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906696 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906764 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907014 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907438 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907528 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907624 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907708 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907782 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907900 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.907920 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29491200-w2pbt"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.906363 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.908024 4786 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.908053 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.925886 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.929045 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.947118 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.948203 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhwf"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.948656 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.948906 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.950231 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.951472 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.951608 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.952270 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.952726 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954104 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954242 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tsvt8"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954733 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954756 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vzqm5\" 
(UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954776 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfh4\" (UniqueName: \"kubernetes.io/projected/d36adcbb-ecba-4efc-90cf-44471fcd3ce4-kube-api-access-zcfh4\") pod \"downloads-7954f5f757-hvzq4\" (UID: \"d36adcbb-ecba-4efc-90cf-44471fcd3ce4\") " pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954798 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af973e-fec0-41b4-83eb-88d80e67a17b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954816 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmbz\" (UniqueName: \"kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954833 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-service-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954847 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-serving-cert\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954866 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954881 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5qf\" (UniqueName: \"kubernetes.io/projected/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-kube-api-access-hm5qf\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954898 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 
00:08:13.954914 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954929 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954945 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d99a778d-271c-476f-a9d5-99b7e8445056-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954961 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2x8\" (UniqueName: \"kubernetes.io/projected/a1af973e-fec0-41b4-83eb-88d80e67a17b-kube-api-access-9n2x8\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954978 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r79bs\" (UniqueName: \"kubernetes.io/projected/a68510e4-f5ff-42ec-bb5d-c0758a4de003-kube-api-access-r79bs\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.954993 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-encryption-config\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955007 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6b2\" (UniqueName: \"kubernetes.io/projected/0c69ee0f-6cee-421a-a605-ec0b946a22c6-kube-api-access-jj6b2\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955030 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-service-ca\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 
00:08:13.955057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qkb\" (UniqueName: \"kubernetes.io/projected/dcac63ca-ef6c-4848-8744-e29c2af9e390-kube-api-access-r6qkb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955077 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-trusted-ca-bundle\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e416439-768c-4148-a44a-2c1d155e32fb-audit-dir\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955107 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-auth-proxy-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955123 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-images\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955140 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a68510e4-f5ff-42ec-bb5d-c0758a4de003-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955156 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcac63ca-ef6c-4848-8744-e29c2af9e390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955175 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955197 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/aa8dd718-bd04-4679-a651-816b958567b5-config\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955215 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8dd718-bd04-4679-a651-816b958567b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955232 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzrz\" (UniqueName: \"kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955251 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6k28\" (UniqueName: \"kubernetes.io/projected/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-kube-api-access-g6k28\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqzq\" (UniqueName: \"kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955283 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955285 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a65c06-00bf-475f-bf73-b09e13502a7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955448 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955550 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563a4e6d-5e85-4eb3-8078-69c0484d6b93-serving-cert\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955590 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955617 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l57\" (UniqueName: \"kubernetes.io/projected/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-kube-api-access-q4l57\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955639 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955659 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcac63ca-ef6c-4848-8744-e29c2af9e390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955680 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955700 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd6c6\" (UniqueName: \"kubernetes.io/projected/a1a65c06-00bf-475f-bf73-b09e13502a7c-kube-api-access-dd6c6\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955702 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955718 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0c69ee0f-6cee-421a-a605-ec0b946a22c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955817 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955839 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955863 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn2zm\" (UniqueName: \"kubernetes.io/projected/563a4e6d-5e85-4eb3-8078-69c0484d6b93-kube-api-access-vn2zm\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955898 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af973e-fec0-41b4-83eb-88d80e67a17b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955916 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-config\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955938 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d99a778d-271c-476f-a9d5-99b7e8445056-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956008 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-config\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956031 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8dd718-bd04-4679-a651-816b958567b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-client\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956067 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956090 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-config\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-serving-cert\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-oauth-config\") pod 
\"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956201 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-audit-policies\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956222 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtn47\" (UniqueName: \"kubernetes.io/projected/6e416439-768c-4148-a44a-2c1d155e32fb-kube-api-access-mtn47\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956270 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956296 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5l2m\" (UniqueName: \"kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956315 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-machine-approver-tls\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" 
Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956331 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956347 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25598\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-kube-api-access-25598\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956364 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956398 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956427 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1a65c06-00bf-475f-bf73-b09e13502a7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956447 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-trusted-ca\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956471 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-oauth-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 
crc kubenswrapper[4786]: I0127 00:08:13.956492 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956509 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-config\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956677 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-service-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.956771 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qc8x5"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.957196 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.957501 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.957604 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.957916 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.961458 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.961965 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.963137 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.963147 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.964110 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.964692 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.965104 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.965600 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-config\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.965643 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.965815 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.965977 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-serving-cert\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.966655 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-audit-policies\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.966930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.967376 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.967645 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbbdx"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.967670 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.955966 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af973e-fec0-41b4-83eb-88d80e67a17b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.971055 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563a4e6d-5e85-4eb3-8078-69c0484d6b93-config\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.971983 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.972074 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973064 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-oauth-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-trusted-ca-bundle\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973493 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973673 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973797 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e416439-768c-4148-a44a-2c1d155e32fb-audit-dir\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.973856 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e416439-768c-4148-a44a-2c1d155e32fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.974413 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-auth-proxy-config\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.974618 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.975245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcac63ca-ef6c-4848-8744-e29c2af9e390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.976175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-service-ca\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.976239 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.976930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcac63ca-ef6c-4848-8744-e29c2af9e390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.978276 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a68510e4-f5ff-42ec-bb5d-c0758a4de003-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.978588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-oauth-config\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.978975 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzmgs"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.979010 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af973e-fec0-41b4-83eb-88d80e67a17b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.980080 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.980666 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-machine-approver-tls\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.981003 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f96jt"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.981445 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.982745 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.983276 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.983879 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.984395 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.984889 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.985784 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.986383 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.986727 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.987298 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.987855 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.988313 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.988775 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw"] Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.989379 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.998155 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563a4e6d-5e85-4eb3-8078-69c0484d6b93-serving-cert\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.999191 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-etcd-client\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:13 crc kubenswrapper[4786]: I0127 00:08:13.999319 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5zhf8"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.000632 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.001644 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-console-serving-cert\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.001960 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.003534 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.004849 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.005759 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.006674 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.007066 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e416439-768c-4148-a44a-2c1d155e32fb-encryption-config\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.008962 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.010663 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.011917 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.012552 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.013841 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9f82m"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.014393 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.015315 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.016105 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4dzh4"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.016488 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.017400 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.018599 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gqmm8"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.019597 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.022798 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hvzq4"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.024465 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.026900 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.028457 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.029952 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.031129 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.032089 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s2dp4"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.033122 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.034140 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhwf"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.035081 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f96jt"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.036014 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.036444 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fmqkz"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.037968 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.039091 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.040080 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.041143 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4dzh4"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.042609 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.043745 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.044849 4786 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.045922 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7k2vr"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.047056 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.048579 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9f82m"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.049660 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzmgs"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.050810 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.051841 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.052871 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gqmm8"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.053813 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.055117 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.055694 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.056051 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-q5rqx"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.056613 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057107 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8dd718-bd04-4679-a651-816b958567b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-config\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057154 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057175 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057191 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1f067e87-dade-4a5b-b69c-cc135c2b6894-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057209 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-serving-cert\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057234 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057357 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 
27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057461 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25598\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-kube-api-access-25598\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057532 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057582 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1a65c06-00bf-475f-bf73-b09e13502a7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057601 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-trusted-ca\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057662 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057681 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-config\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057699 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057722 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057747 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfh4\" (UniqueName: \"kubernetes.io/projected/d36adcbb-ecba-4efc-90cf-44471fcd3ce4-kube-api-access-zcfh4\") pod \"downloads-7954f5f757-hvzq4\" (UID: \"d36adcbb-ecba-4efc-90cf-44471fcd3ce4\") " pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057804 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057819 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d99a778d-271c-476f-a9d5-99b7e8445056-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057875 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899hz\" (UniqueName: \"kubernetes.io/projected/1f067e87-dade-4a5b-b69c-cc135c2b6894-kube-api-access-899hz\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057899 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adbddbdc-7896-452b-aa1e-e15fae2287ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057922 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6b2\" (UniqueName: \"kubernetes.io/projected/0c69ee0f-6cee-421a-a605-ec0b946a22c6-kube-api-access-jj6b2\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057962 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-images\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.057982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4f6d\" (UniqueName: \"kubernetes.io/projected/ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701-kube-api-access-c4f6d\") pod \"migrator-59844c95c7-hvjmw\" (UID: \"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058000 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8dd718-bd04-4679-a651-816b958567b5-config\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058018 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8dd718-bd04-4679-a651-816b958567b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058018 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dzrz\" (UniqueName: \"kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058089 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6k28\" (UniqueName: \"kubernetes.io/projected/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-kube-api-access-g6k28\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058113 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a1a65c06-00bf-475f-bf73-b09e13502a7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058159 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqzq\" (UniqueName: \"kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058187 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058224 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd6c6\" (UniqueName: \"kubernetes.io/projected/a1a65c06-00bf-475f-bf73-b09e13502a7c-kube-api-access-dd6c6\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058241 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c69ee0f-6cee-421a-a605-ec0b946a22c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058265 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058278 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: 
\"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058330 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058398 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058417 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d99a778d-271c-476f-a9d5-99b7e8445056-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058434 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.058690 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1a65c06-00bf-475f-bf73-b09e13502a7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.059316 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.059518 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-config\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.059913 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.059967 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-config\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.060106 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-trusted-ca\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.060286 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8dd718-bd04-4679-a651-816b958567b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.060555 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-serving-cert\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.060587 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.061471 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.061479 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.061630 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d99a778d-271c-476f-a9d5-99b7e8445056-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.062255 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c69ee0f-6cee-421a-a605-ec0b946a22c6-images\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.062437 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8dd718-bd04-4679-a651-816b958567b5-config\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063149 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063151 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063251 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063270 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063517 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vzqm5\" 
(UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.063773 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.064748 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.066012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a65c06-00bf-475f-bf73-b09e13502a7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.066065 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c69ee0f-6cee-421a-a605-ec0b946a22c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.066895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.067014 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.068783 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d99a778d-271c-476f-a9d5-99b7e8445056-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.076131 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.097123 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.116734 4786 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.156781 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.158973 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.159011 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.159059 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1f067e87-dade-4a5b-b69c-cc135c2b6894-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.159171 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899hz\" (UniqueName: \"kubernetes.io/projected/1f067e87-dade-4a5b-b69c-cc135c2b6894-kube-api-access-899hz\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.159207 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adbddbdc-7896-452b-aa1e-e15fae2287ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.159341 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4f6d\" (UniqueName: \"kubernetes.io/projected/ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701-kube-api-access-c4f6d\") pod \"migrator-59844c95c7-hvjmw\" (UID: \"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.176851 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.208973 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.216314 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.236546 
4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.256597 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.277072 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.297511 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.317656 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.336758 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.356237 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.377216 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.397311 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.436211 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.444864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmbz\" (UniqueName: \"kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz\") pod \"image-pruner-29491200-w2pbt\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.459369 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.476404 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.497265 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.516381 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.536603 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.556144 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.577020 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 
00:08:14.597010 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.616707 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.636627 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.658283 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.684009 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5l2m\" (UniqueName: \"kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m\") pod \"route-controller-manager-6576b87f9c-46v4c\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.697054 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.703230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2zm\" (UniqueName: \"kubernetes.io/projected/563a4e6d-5e85-4eb3-8078-69c0484d6b93-kube-api-access-vn2zm\") pod \"authentication-operator-69f744f599-s2dp4\" (UID: \"563a4e6d-5e85-4eb3-8078-69c0484d6b93\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.716659 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.721963 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.736677 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.757445 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.777387 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.799737 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.804271 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.840257 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.852704 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtn47\" (UniqueName: \"kubernetes.io/projected/6e416439-768c-4148-a44a-2c1d155e32fb-kube-api-access-mtn47\") pod \"apiserver-7bbb656c7d-ggm8j\" (UID: \"6e416439-768c-4148-a44a-2c1d155e32fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.856901 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.876602 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.896965 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.938032 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l57\" (UniqueName: \"kubernetes.io/projected/e0b3fdf9-0ced-429c-8169-9cf860c9e46e-kube-api-access-q4l57\") pod \"console-f9d7485db-tsvt8\" (UID: \"e0b3fdf9-0ced-429c-8169-9cf860c9e46e\") " pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.944020 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29491200-w2pbt"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.953325 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2x8\" (UniqueName: \"kubernetes.io/projected/a1af973e-fec0-41b4-83eb-88d80e67a17b-kube-api-access-9n2x8\") pod \"openshift-apiserver-operator-796bbdcf4f-rlsvp\" (UID: \"a1af973e-fec0-41b4-83eb-88d80e67a17b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:14 crc kubenswrapper[4786]: W0127 00:08:14.959831 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0200bfc5_4623_473a_822f_34fe28080b1c.slice/crio-2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57 WatchSource:0}: Error finding container 2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57: Status 404 returned error can't find the container with id 2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57 Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.974539 4786 request.go:700] Waited for 1.001081758s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.975441 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5qf\" (UniqueName: \"kubernetes.io/projected/744b6d0c-e657-47d3-9ea5-fa7b4e25e749-kube-api-access-hm5qf\") pod \"machine-approver-56656f9798-cvbwh\" (UID: \"744b6d0c-e657-47d3-9ea5-fa7b4e25e749\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.980617 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.986010 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29491200-w2pbt" event={"ID":"0200bfc5-4623-473a-822f-34fe28080b1c","Type":"ContainerStarted","Data":"2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57"} Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.986787 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" Jan 27 00:08:14 crc kubenswrapper[4786]: I0127 00:08:14.994090 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r79bs\" (UniqueName: \"kubernetes.io/projected/a68510e4-f5ff-42ec-bb5d-c0758a4de003-kube-api-access-r79bs\") pod \"cluster-samples-operator-665b6dd947-l4h5c\" (UID: \"a68510e4-f5ff-42ec-bb5d-c0758a4de003\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.009300 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.010903 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qkb\" (UniqueName: \"kubernetes.io/projected/dcac63ca-ef6c-4848-8744-e29c2af9e390-kube-api-access-r6qkb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pq42\" (UID: \"dcac63ca-ef6c-4848-8744-e29c2af9e390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:15 crc kubenswrapper[4786]: W0127 00:08:15.016222 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod415679b9_bb9e_4931_8260_103aa2e42237.slice/crio-8bd4a403d08e016fb963bbf86f2a4d41dd937aa38c9322e16470d3b4c82a109a WatchSource:0}: Error finding container 8bd4a403d08e016fb963bbf86f2a4d41dd937aa38c9322e16470d3b4c82a109a: Status 404 returned error can't find the container with id 8bd4a403d08e016fb963bbf86f2a4d41dd937aa38c9322e16470d3b4c82a109a Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.017099 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.034973 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.040935 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.058480 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.060513 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1f067e87-dade-4a5b-b69c-cc135c2b6894-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.069745 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s2dp4"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.070141 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.076830 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.077194 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.087107 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.104963 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.117088 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.137242 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.155884 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: E0127 00:08:15.161068 4786 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 00:08:15 crc kubenswrapper[4786]: E0127 00:08:15.161133 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert podName:adbddbdc-7896-452b-aa1e-e15fae2287ba nodeName:}" failed. No retries permitted until 2026-01-27 00:08:15.661117102 +0000 UTC m=+141.144804145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert") pod "kube-controller-manager-operator-78b949d7b-865v4" (UID: "adbddbdc-7896-452b-aa1e-e15fae2287ba") : failed to sync secret cache: timed out waiting for the condition Jan 27 00:08:15 crc kubenswrapper[4786]: E0127 00:08:15.161355 4786 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 00:08:15 crc kubenswrapper[4786]: E0127 00:08:15.161385 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config podName:adbddbdc-7896-452b-aa1e-e15fae2287ba nodeName:}" failed. No retries permitted until 2026-01-27 00:08:15.661378609 +0000 UTC m=+141.145065652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config") pod "kube-controller-manager-operator-78b949d7b-865v4" (UID: "adbddbdc-7896-452b-aa1e-e15fae2287ba") : failed to sync configmap cache: timed out waiting for the condition Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.174463 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.176220 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 00:08:15 crc kubenswrapper[4786]: W0127 00:08:15.187938 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1af973e_fec0_41b4_83eb_88d80e67a17b.slice/crio-5c9f99ae8c939988e7e5d74c312dcc6dc6d62e7c8c77466d8442a3284ade39d8 WatchSource:0}: Error finding container 5c9f99ae8c939988e7e5d74c312dcc6dc6d62e7c8c77466d8442a3284ade39d8: Status 404 returned error can't find the container with id 5c9f99ae8c939988e7e5d74c312dcc6dc6d62e7c8c77466d8442a3284ade39d8 Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.196092 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.216869 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.228986 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tsvt8"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.236476 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: W0127 00:08:15.252941 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0b3fdf9_0ced_429c_8169_9cf860c9e46e.slice/crio-6dd38268acff8c1fe98d87af3f286a85dd7f7252cac02ddc100715307588bdf6 WatchSource:0}: Error finding container 6dd38268acff8c1fe98d87af3f286a85dd7f7252cac02ddc100715307588bdf6: Status 404 returned error can't find the container with id 6dd38268acff8c1fe98d87af3f286a85dd7f7252cac02ddc100715307588bdf6 Jan 27 00:08:15 crc 
kubenswrapper[4786]: I0127 00:08:15.255366 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.262324 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 00:08:15 crc kubenswrapper[4786]: W0127 00:08:15.277224 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcac63ca_ef6c_4848_8744_e29c2af9e390.slice/crio-194ac1cad89ed6c6120e4b4923e5c6fada68f619aef1bbf35507175647403fdf WatchSource:0}: Error finding container 194ac1cad89ed6c6120e4b4923e5c6fada68f619aef1bbf35507175647403fdf: Status 404 returned error can't find the container with id 194ac1cad89ed6c6120e4b4923e5c6fada68f619aef1bbf35507175647403fdf Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.277727 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.295846 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.295966 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.316926 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.336609 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.357954 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.371634 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j"] Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.377397 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 00:08:15 crc kubenswrapper[4786]: W0127 00:08:15.385804 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e416439_768c_4148_a44a_2c1d155e32fb.slice/crio-925e627ad72e7aa7399614ba755f4743825e0b2c905da92a19e937d863e73506 WatchSource:0}: Error finding container 925e627ad72e7aa7399614ba755f4743825e0b2c905da92a19e937d863e73506: Status 404 returned error can't find the container with id 925e627ad72e7aa7399614ba755f4743825e0b2c905da92a19e937d863e73506 Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.395628 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.416713 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.436460 4786 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.456036 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.477912 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.496528 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.516353 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.563154 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.577806 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.596136 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.617014 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.636062 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.656180 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.676687 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.689483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.689513 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.691234 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbddbdc-7896-452b-aa1e-e15fae2287ba-config\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.696123 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.697761 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbddbdc-7896-452b-aa1e-e15fae2287ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.716209 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.736952 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.756267 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.777806 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.797176 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.817698 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.836335 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.857214 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.876918 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.897529 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.917599 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.937312 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.957810 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.974632 4786 request.go:700] Waited for 1.917773412s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 
00:08:15.977694 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.996180 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" event={"ID":"744b6d0c-e657-47d3-9ea5-fa7b4e25e749","Type":"ContainerStarted","Data":"523785b01c35db85448e05d72f69e5bef71a6d722c50fe3d3c606c155eb9b61c"} Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.996549 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" event={"ID":"744b6d0c-e657-47d3-9ea5-fa7b4e25e749","Type":"ContainerStarted","Data":"d7e850489d2e792f2e6fbad74efb60d79d62462cc14a729b1b83ccdbf4b0dd51"} Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.996840 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" event={"ID":"744b6d0c-e657-47d3-9ea5-fa7b4e25e749","Type":"ContainerStarted","Data":"5233990a7122ec192a265f82ee0babcd2213b97fdce22ce0a32c692abe82e028"} Jan 27 00:08:15 crc kubenswrapper[4786]: I0127 00:08:15.998692 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.005115 4786 generic.go:334] "Generic (PLEG): container finished" podID="6e416439-768c-4148-a44a-2c1d155e32fb" containerID="9d8854e6e16f545061afe0ffafc5bf9e2c0bc46e51889fc1773e5a8e0fcf4487" exitCode=0 Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.005340 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" event={"ID":"6e416439-768c-4148-a44a-2c1d155e32fb","Type":"ContainerDied","Data":"9d8854e6e16f545061afe0ffafc5bf9e2c0bc46e51889fc1773e5a8e0fcf4487"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.005463 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" event={"ID":"6e416439-768c-4148-a44a-2c1d155e32fb","Type":"ContainerStarted","Data":"925e627ad72e7aa7399614ba755f4743825e0b2c905da92a19e937d863e73506"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.010413 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" event={"ID":"a1af973e-fec0-41b4-83eb-88d80e67a17b","Type":"ContainerStarted","Data":"f581dcc161fa491ad0f6fc554efd00b6c0431cb2a4dfadb7b2901fd55cc5c8a2"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.010504 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" event={"ID":"a1af973e-fec0-41b4-83eb-88d80e67a17b","Type":"ContainerStarted","Data":"5c9f99ae8c939988e7e5d74c312dcc6dc6d62e7c8c77466d8442a3284ade39d8"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.028308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" event={"ID":"dcac63ca-ef6c-4848-8744-e29c2af9e390","Type":"ContainerStarted","Data":"065ca407111d7360f81ae02b1533a5dcad34fbc4b2df4761403eae0bd0f450be"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.028464 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" event={"ID":"dcac63ca-ef6c-4848-8744-e29c2af9e390","Type":"ContainerStarted","Data":"194ac1cad89ed6c6120e4b4923e5c6fada68f619aef1bbf35507175647403fdf"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.031011 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" event={"ID":"563a4e6d-5e85-4eb3-8078-69c0484d6b93","Type":"ContainerStarted","Data":"ca88f11c0440008bd65373c77e5764cd67dbb7507c0cd1a9222e07e02ca1881a"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.031187 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" event={"ID":"563a4e6d-5e85-4eb3-8078-69c0484d6b93","Type":"ContainerStarted","Data":"7b98bc4d1fcf00f071dcb49d47c96cd7af4702c270ef995caab650161c20bd15"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.035047 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" event={"ID":"415679b9-bb9e-4931-8260-103aa2e42237","Type":"ContainerStarted","Data":"6ca4889e4895ee1cd8e258a50f8edbe177c3cf6164014ad14ff267919e943ee8"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.035289 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" event={"ID":"415679b9-bb9e-4931-8260-103aa2e42237","Type":"ContainerStarted","Data":"8bd4a403d08e016fb963bbf86f2a4d41dd937aa38c9322e16470d3b4c82a109a"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.036423 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.043176 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29491200-w2pbt" event={"ID":"0200bfc5-4623-473a-822f-34fe28080b1c","Type":"ContainerStarted","Data":"12cb81b0bbf952c697c8bac09f42d09c205e59f6e329d9dd0c908ac727ce8a45"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.049495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tsvt8" event={"ID":"e0b3fdf9-0ced-429c-8169-9cf860c9e46e","Type":"ContainerStarted","Data":"c2e9a4335449cb32cf2ccd2cc5749f48acf7faf2c44400684c87ebdfdf98a3d6"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.050043 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tsvt8" event={"ID":"e0b3fdf9-0ced-429c-8169-9cf860c9e46e","Type":"ContainerStarted","Data":"6dd38268acff8c1fe98d87af3f286a85dd7f7252cac02ddc100715307588bdf6"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.051401 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dzrz\" (UniqueName: \"kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz\") pod \"oauth-openshift-558db77b4-vzqm5\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.055631 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" 
event={"ID":"a68510e4-f5ff-42ec-bb5d-c0758a4de003","Type":"ContainerStarted","Data":"e95f1563b4fda9b18b5a5b2f2786de68a56cffedb78078d7b6791e12db1d4667"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.055699 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" event={"ID":"a68510e4-f5ff-42ec-bb5d-c0758a4de003","Type":"ContainerStarted","Data":"02b98a313def261b5708a42d6df81741bf200bde072ff58f3cefa8caa0dc5fcb"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.055718 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" event={"ID":"a68510e4-f5ff-42ec-bb5d-c0758a4de003","Type":"ContainerStarted","Data":"d061e58b1e5f7c7e18cc89f1ea50d5bc759cc824ed76dfcad5f25f2f53277713"} Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.067709 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.084432 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25598\" (UniqueName: \"kubernetes.io/projected/d99a778d-271c-476f-a9d5-99b7e8445056-kube-api-access-25598\") pod \"cluster-image-registry-operator-dc59b4c8b-r4jw2\" (UID: \"d99a778d-271c-476f-a9d5-99b7e8445056\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.106137 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfh4\" (UniqueName: \"kubernetes.io/projected/d36adcbb-ecba-4efc-90cf-44471fcd3ce4-kube-api-access-zcfh4\") pod \"downloads-7954f5f757-hvzq4\" (UID: \"d36adcbb-ecba-4efc-90cf-44471fcd3ce4\") " pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.116853 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.119983 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6b2\" (UniqueName: \"kubernetes.io/projected/0c69ee0f-6cee-421a-a605-ec0b946a22c6-kube-api-access-jj6b2\") pod \"machine-api-operator-5694c8668f-5zhf8\" (UID: \"0c69ee0f-6cee-421a-a605-ec0b946a22c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.134855 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.142915 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6k28\" (UniqueName: \"kubernetes.io/projected/ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9-kube-api-access-g6k28\") pod \"console-operator-58897d9998-vbbdx\" (UID: \"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9\") " pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.154426 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8dd718-bd04-4679-a651-816b958567b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5gz28\" (UID: \"aa8dd718-bd04-4679-a651-816b958567b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.171116 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd6c6\" (UniqueName: \"kubernetes.io/projected/a1a65c06-00bf-475f-bf73-b09e13502a7c-kube-api-access-dd6c6\") pod \"openshift-config-operator-7777fb866f-pdz5s\" (UID: \"a1a65c06-00bf-475f-bf73-b09e13502a7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.197382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqzq\" (UniqueName: \"kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq\") pod \"controller-manager-879f6c89f-l7x9t\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.247223 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899hz\" (UniqueName: \"kubernetes.io/projected/1f067e87-dade-4a5b-b69c-cc135c2b6894-kube-api-access-899hz\") pod \"multus-admission-controller-857f4d67dd-nzmgs\" (UID: \"1f067e87-dade-4a5b-b69c-cc135c2b6894\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.262380 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adbddbdc-7896-452b-aa1e-e15fae2287ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-865v4\" (UID: \"adbddbdc-7896-452b-aa1e-e15fae2287ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.272184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4f6d\" (UniqueName: \"kubernetes.io/projected/ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701-kube-api-access-c4f6d\") pod \"migrator-59844c95c7-hvjmw\" (UID: \"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.297990 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqvvl\" (UniqueName: \"kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298051 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-serving-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cjc\" (UniqueName: \"kubernetes.io/projected/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-kube-api-access-d8cjc\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298372 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxtl\" (UniqueName: \"kubernetes.io/projected/d5ebf425-82c6-42e9-acee-d09295b00d77-kube-api-access-9sxtl\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298419 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-client\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298457 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgms4\" (UniqueName: \"kubernetes.io/projected/f68d0d18-9904-432a-bcf6-8791e8a2fee0-kube-api-access-kgms4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298484 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298501 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-metrics-certs\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298659 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-config\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 
00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298684 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-default-certificate\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298701 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n659\" (UniqueName: \"kubernetes.io/projected/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-kube-api-access-6n659\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298737 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rw92\" (UniqueName: \"kubernetes.io/projected/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-kube-api-access-7rw92\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298753 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-client\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298779 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298798 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1df1f90-69b0-4b9f-b262-6c569b002d6c-metrics-tls\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298825 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298839 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b2300e-019c-4184-bab7-80bace36ad38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc 
kubenswrapper[4786]: I0127 00:08:16.298889 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298921 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298942 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzxq\" (UniqueName: \"kubernetes.io/projected/3b6d855b-f5f5-4405-b9f1-f4852f66e042-kube-api-access-2rzxq\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298970 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.298989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.299007 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-config\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.299046 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-audit\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.299064 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-key\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.299113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496385e4-0cf2-469b-bfab-08c939c34912-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.299524 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:16.799507141 +0000 UTC m=+142.283194184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300186 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9588d\" (UniqueName: \"kubernetes.io/projected/694929a3-e8eb-43f6-822b-58d265787d3e-kube-api-access-9588d\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300232 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b6d855b-f5f5-4405-b9f1-f4852f66e042-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300256 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-service-ca-bundle\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300373 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68d0d18-9904-432a-bcf6-8791e8a2fee0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-serving-cert\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.300977 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgss\" (UniqueName: \"kubernetes.io/projected/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-kube-api-access-6hgss\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.301646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496385e4-0cf2-469b-bfab-08c939c34912-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.301961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.302120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-serving-cert\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.302157 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.302494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b2300e-019c-4184-bab7-80bace36ad38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303099 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-encryption-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303151 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303175 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8rk\" (UniqueName: \"kubernetes.io/projected/39b2300e-019c-4184-bab7-80bace36ad38-kube-api-access-lv8rk\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303249 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-serving-cert\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-images\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303322 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303478 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-audit-dir\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.303552 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-image-import-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304014 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304105 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496385e4-0cf2-469b-bfab-08c939c34912-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pq7t\" (UniqueName: \"kubernetes.io/projected/760d71b7-4d63-4653-a096-ce45c0d35505-kube-api-access-2pq7t\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k829b\" (UniqueName: \"kubernetes.io/projected/4dc22252-c749-471b-9eb2-71bc3e451652-kube-api-access-k829b\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304385 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9fx\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304424 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.304447 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-proxy-tls\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306450 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/694929a3-e8eb-43f6-822b-58d265787d3e-metrics-tls\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306488 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306551 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-cabundle\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306622 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b6d855b-f5f5-4405-b9f1-f4852f66e042-proxy-tls\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306642 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdnk\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-kube-api-access-xjdnk\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306660 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-srv-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306681 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-node-pullsecrets\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306890 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1df1f90-69b0-4b9f-b262-6c569b002d6c-trusted-ca\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.306907 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-stats-auth\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.349867 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.364967 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.365656 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407481 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.407711 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:16.907673633 +0000 UTC m=+142.391360686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407773 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-srv-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407814 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-stats-auth\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407840 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-node-pullsecrets\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407865 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1df1f90-69b0-4b9f-b262-6c569b002d6c-trusted-ca\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407888 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-serving-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407911 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-webhook-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407937 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqvvl\" (UniqueName: \"kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.407985 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-socket-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-node-bootstrap-token\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cjc\" (UniqueName: \"kubernetes.io/projected/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-kube-api-access-d8cjc\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408038 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-node-pullsecrets\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408082 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxtl\" (UniqueName: \"kubernetes.io/projected/d5ebf425-82c6-42e9-acee-d09295b00d77-kube-api-access-9sxtl\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408109 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-client\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408885 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kgms4\" (UniqueName: \"kubernetes.io/projected/f68d0d18-9904-432a-bcf6-8791e8a2fee0-kube-api-access-kgms4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.408925 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69jsh\" (UniqueName: \"kubernetes.io/projected/e411ada2-c02e-4f63-887a-e062bfacf5cd-kube-api-access-69jsh\") pod \"ingress-canary-4dzh4\" (UID: \"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.409026 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.409351 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1df1f90-69b0-4b9f-b262-6c569b002d6c-trusted-ca\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.409629 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.409860 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-serving-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412311 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-metrics-certs\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412382 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74919585-4bc6-4192-8f7d-6648c409a909-metrics-tls\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412414 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe040435-1925-40cd-9c29-9ed97e57516c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: 
\"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412486 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-config\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412543 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-default-certificate\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412692 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rw92\" (UniqueName: \"kubernetes.io/projected/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-kube-api-access-7rw92\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412739 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n659\" (UniqueName: \"kubernetes.io/projected/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-kube-api-access-6n659\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74919585-4bc6-4192-8f7d-6648c409a909-config-volume\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412805 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-plugins-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412830 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-client\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412878 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412903 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1df1f90-69b0-4b9f-b262-6c569b002d6c-metrics-tls\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412921 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-certs\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412961 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b2300e-019c-4184-bab7-80bace36ad38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.412981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8l4g\" (UniqueName: \"kubernetes.io/projected/1e618028-3c51-4238-b319-f43268fb9d03-kube-api-access-w8l4g\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413019 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413038 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413055 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-registration-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413088 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413112 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzxq\" (UniqueName: \"kubernetes.io/projected/3b6d855b-f5f5-4405-b9f1-f4852f66e042-kube-api-access-2rzxq\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413223 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-config\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413269 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e411ada2-c02e-4f63-887a-e062bfacf5cd-cert\") pod \"ingress-canary-4dzh4\" (UID: \"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413291 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-audit\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413308 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-key\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-csi-data-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496385e4-0cf2-469b-bfab-08c939c34912-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413393 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9588d\" (UniqueName: \"kubernetes.io/projected/694929a3-e8eb-43f6-822b-58d265787d3e-kube-api-access-9588d\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413430 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b6d855b-f5f5-4405-b9f1-f4852f66e042-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413462 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-service-ca-bundle\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413485 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68d0d18-9904-432a-bcf6-8791e8a2fee0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413555 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzp8\" (UniqueName: \"kubernetes.io/projected/999ae037-c517-4444-8a98-29fe9b8f937d-kube-api-access-gwzp8\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413626 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-serving-cert\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413704 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgss\" (UniqueName: \"kubernetes.io/projected/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-kube-api-access-6hgss\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413732 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413823 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm799\" (UniqueName: \"kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413896 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496385e4-0cf2-469b-bfab-08c939c34912-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413963 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-mountpoint-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.413986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414035 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svb4f\" (UniqueName: \"kubernetes.io/projected/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-kube-api-access-svb4f\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414059 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-serving-cert\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414116 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b2300e-019c-4184-bab7-80bace36ad38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414143 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414197 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-encryption-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414225 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414289 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8rk\" (UniqueName: \"kubernetes.io/projected/39b2300e-019c-4184-bab7-80bace36ad38-kube-api-access-lv8rk\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414404 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48nw\" (UniqueName: \"kubernetes.io/projected/74919585-4bc6-4192-8f7d-6648c409a909-kube-api-access-d48nw\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414674 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-serving-cert\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414716 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414956 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-images\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414980 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8st6\" (UniqueName: \"kubernetes.io/projected/b58be9f2-cdec-47b7-8400-62f800b6dbe1-kube-api-access-w8st6\") pod 
\"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415023 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415043 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-audit-dir\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415069 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-image-import-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415084 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-tmpfs\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415222 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415248 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496385e4-0cf2-469b-bfab-08c939c34912-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc 
kubenswrapper[4786]: I0127 00:08:16.415264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pq7t\" (UniqueName: \"kubernetes.io/projected/760d71b7-4d63-4653-a096-ce45c0d35505-kube-api-access-2pq7t\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415315 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k829b\" (UniqueName: \"kubernetes.io/projected/4dc22252-c749-471b-9eb2-71bc3e451652-kube-api-access-k829b\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415344 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9fx\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-stats-auth\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-proxy-tls\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415707 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415740 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snskr\" (UniqueName: \"kubernetes.io/projected/fe040435-1925-40cd-9c29-9ed97e57516c-kube-api-access-snskr\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: \"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415791 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415819 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/694929a3-e8eb-43f6-822b-58d265787d3e-metrics-tls\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415843 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-cabundle\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b6d855b-f5f5-4405-b9f1-f4852f66e042-proxy-tls\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415895 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdnk\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-kube-api-access-xjdnk\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.415923 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-srv-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.417222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b2300e-019c-4184-bab7-80bace36ad38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.417970 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:16.917955856 +0000 UTC m=+142.401642899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.420100 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-service-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.420178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-config\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.420492 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-ca\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.420968 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.421030 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-cabundle\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.421779 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.421996 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.422368 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-proxy-tls\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.422845 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.423152 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-service-ca-bundle\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.423393 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.423707 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-default-certificate\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.423955 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.424606 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ebf425-82c6-42e9-acee-d09295b00d77-config\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.425019 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-audit\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.426079 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-serving-cert\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.426415 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.427059 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68d0d18-9904-432a-bcf6-8791e8a2fee0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.431850 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/760d71b7-4d63-4653-a096-ce45c0d35505-audit-dir\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.433294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-serving-cert\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.434692 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.437755 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.414953 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-etcd-client\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.445359 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-images\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 
00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.445490 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.446031 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/694929a3-e8eb-43f6-822b-58d265787d3e-metrics-tls\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.446905 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.447096 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-serving-cert\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.447180 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496385e4-0cf2-469b-bfab-08c939c34912-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.447338 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b6d855b-f5f5-4405-b9f1-f4852f66e042-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.447467 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/760d71b7-4d63-4653-a096-ce45c0d35505-image-import-ca\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.447731 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-signing-key\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.448589 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.448737 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: 
\"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.449438 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.451877 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.460022 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/760d71b7-4d63-4653-a096-ce45c0d35505-encryption-config\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.461248 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-metrics-certs\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.461354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dc22252-c749-471b-9eb2-71bc3e451652-srv-cert\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.461363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b2300e-019c-4184-bab7-80bace36ad38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.461503 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1df1f90-69b0-4b9f-b262-6c569b002d6c-metrics-tls\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.461589 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5ebf425-82c6-42e9-acee-d09295b00d77-etcd-client\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.464274 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496385e4-0cf2-469b-bfab-08c939c34912-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.464526 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2"] Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.470161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cjc\" (UniqueName: \"kubernetes.io/projected/dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5-kube-api-access-d8cjc\") pod \"machine-config-operator-74547568cd-zp9v5\" (UID: \"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.474855 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxtl\" (UniqueName: \"kubernetes.io/projected/d5ebf425-82c6-42e9-acee-d09295b00d77-kube-api-access-9sxtl\") pod \"etcd-operator-b45778765-pmhwf\" (UID: \"d5ebf425-82c6-42e9-acee-d09295b00d77\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.492810 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b6d855b-f5f5-4405-b9f1-f4852f66e042-proxy-tls\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.494088 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.501272 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqvvl\" (UniqueName: \"kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl\") pod \"marketplace-operator-79b997595-rgfxb\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.510002 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.514875 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgms4\" (UniqueName: \"kubernetes.io/projected/f68d0d18-9904-432a-bcf6-8791e8a2fee0-kube-api-access-kgms4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvxx5\" (UID: \"f68d0d18-9904-432a-bcf6-8791e8a2fee0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517047 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517386 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517441 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm799\" (UniqueName: \"kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-mountpoint-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517505 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svb4f\" (UniqueName: \"kubernetes.io/projected/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-kube-api-access-svb4f\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.517525 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.519943 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48nw\" (UniqueName: \"kubernetes.io/projected/74919585-4bc6-4192-8f7d-6648c409a909-kube-api-access-d48nw\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520022 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w8st6\" (UniqueName: \"kubernetes.io/projected/b58be9f2-cdec-47b7-8400-62f800b6dbe1-kube-api-access-w8st6\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-tmpfs\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snskr\" (UniqueName: \"kubernetes.io/projected/fe040435-1925-40cd-9c29-9ed97e57516c-kube-api-access-snskr\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: \"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520234 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-srv-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-webhook-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520299 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-socket-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-node-bootstrap-token\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520383 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69jsh\" (UniqueName: \"kubernetes.io/projected/e411ada2-c02e-4f63-887a-e062bfacf5cd-kube-api-access-69jsh\") pod \"ingress-canary-4dzh4\" (UID: 
\"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520444 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74919585-4bc6-4192-8f7d-6648c409a909-metrics-tls\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520472 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe040435-1925-40cd-9c29-9ed97e57516c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: \"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520550 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74919585-4bc6-4192-8f7d-6648c409a909-config-volume\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520600 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-plugins-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520641 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-certs\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520675 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.520919 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.020898018 +0000 UTC m=+142.504585141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520673 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8l4g\" (UniqueName: \"kubernetes.io/projected/1e618028-3c51-4238-b319-f43268fb9d03-kube-api-access-w8l4g\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.520992 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-registration-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.521022 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e411ada2-c02e-4f63-887a-e062bfacf5cd-cert\") pod \"ingress-canary-4dzh4\" (UID: \"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.521050 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-csi-data-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.521080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzp8\" (UniqueName: \"kubernetes.io/projected/999ae037-c517-4444-8a98-29fe9b8f937d-kube-api-access-gwzp8\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.521401 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-registration-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.525056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.525182 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-mountpoint-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: 
\"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.527306 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-csi-data-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.527368 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-socket-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.528657 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-tmpfs\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.529308 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.529889 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.529939 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e411ada2-c02e-4f63-887a-e062bfacf5cd-cert\") pod \"ingress-canary-4dzh4\" (UID: \"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.529963 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1e618028-3c51-4238-b319-f43268fb9d03-plugins-dir\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.530539 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74919585-4bc6-4192-8f7d-6648c409a909-config-volume\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.530732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.531730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.535272 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b58be9f2-cdec-47b7-8400-62f800b6dbe1-srv-cert\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.539025 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-webhook-cert\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.540039 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-certs\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.540715 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/999ae037-c517-4444-8a98-29fe9b8f937d-node-bootstrap-token\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.541065 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe040435-1925-40cd-9c29-9ed97e57516c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: \"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.542307 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.547230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74919585-4bc6-4192-8f7d-6648c409a909-metrics-tls\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.548472 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.550601 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rw92\" (UniqueName: \"kubernetes.io/projected/ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824-kube-api-access-7rw92\") pod \"service-ca-9c57cc56f-f96jt\" (UID: \"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824\") " pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.577037 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.583652 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n659\" (UniqueName: \"kubernetes.io/projected/a321b23e-78f2-4c8b-a50a-75fdde6a4d8b-kube-api-access-6n659\") pod \"service-ca-operator-777779d784-8pgnx\" (UID: \"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.602901 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496385e4-0cf2-469b-bfab-08c939c34912-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7pgkx\" (UID: \"496385e4-0cf2-469b-bfab-08c939c34912\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.614849 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzxq\" (UniqueName: \"kubernetes.io/projected/3b6d855b-f5f5-4405-b9f1-f4852f66e042-kube-api-access-2rzxq\") pod \"machine-config-controller-84d6567774-l42qz\" (UID: \"3b6d855b-f5f5-4405-b9f1-f4852f66e042\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.625084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.625561 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 00:08:17.125547689 +0000 UTC m=+142.609234722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.638875 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.666514 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgss\" (UniqueName: \"kubernetes.io/projected/f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb-kube-api-access-6hgss\") pod \"router-default-5444994796-qc8x5\" (UID: \"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb\") " pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.690504 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdnk\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-kube-api-access-xjdnk\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.695801 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1df1f90-69b0-4b9f-b262-6c569b002d6c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-66gz5\" (UID: \"f1df1f90-69b0-4b9f-b262-6c569b002d6c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.720877 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hvzq4"] Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.721993 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pq7t\" (UniqueName: \"kubernetes.io/projected/760d71b7-4d63-4653-a096-ce45c0d35505-kube-api-access-2pq7t\") pod \"apiserver-76f77b778f-7k2vr\" (UID: \"760d71b7-4d63-4653-a096-ce45c0d35505\") " pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.727117 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.727480 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.227465033 +0000 UTC m=+142.711152066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.733228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k829b\" (UniqueName: \"kubernetes.io/projected/4dc22252-c749-471b-9eb2-71bc3e451652-kube-api-access-k829b\") pod \"olm-operator-6b444d44fb-7vzpl\" (UID: \"4dc22252-c749-471b-9eb2-71bc3e451652\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.746301 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.751045 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9fx\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.761272 4786 csr.go:261] certificate signing request csr-8rf9l is approved, waiting to be issued Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.767316 4786 csr.go:257] certificate signing request csr-8rf9l is issued Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.771940 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.775228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9588d\" (UniqueName: \"kubernetes.io/projected/694929a3-e8eb-43f6-822b-58d265787d3e-kube-api-access-9588d\") pod \"dns-operator-744455d44c-fmqkz\" (UID: \"694929a3-e8eb-43f6-822b-58d265787d3e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.782886 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.789228 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.794357 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8rk\" (UniqueName: \"kubernetes.io/projected/39b2300e-019c-4184-bab7-80bace36ad38-kube-api-access-lv8rk\") pod \"kube-storage-version-migrator-operator-b67b599dd-fkv4p\" (UID: \"39b2300e-019c-4184-bab7-80bace36ad38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.795228 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.805498 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.817151 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8l4g\" (UniqueName: \"kubernetes.io/projected/1e618028-3c51-4238-b319-f43268fb9d03-kube-api-access-w8l4g\") pod \"csi-hostpathplugin-gqmm8\" (UID: \"1e618028-3c51-4238-b319-f43268fb9d03\") " pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.829035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.829412 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.329396385 +0000 UTC m=+142.813083418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.829636 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.835676 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.851778 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm799\" (UniqueName: \"kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799\") pod \"collect-profiles-29491200-glqtq\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.854966 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.855659 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzp8\" (UniqueName: \"kubernetes.io/projected/999ae037-c517-4444-8a98-29fe9b8f937d-kube-api-access-gwzp8\") pod \"machine-config-server-q5rqx\" (UID: \"999ae037-c517-4444-8a98-29fe9b8f937d\") " pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.856135 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.869960 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svb4f\" (UniqueName: \"kubernetes.io/projected/e10fe708-13fe-4f4e-8a85-7ad989a1ac6a-kube-api-access-svb4f\") pod \"packageserver-d55dfcdfc-hqsbf\" (UID: \"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.884138 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.889764 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.890135 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48nw\" (UniqueName: \"kubernetes.io/projected/74919585-4bc6-4192-8f7d-6648c409a909-kube-api-access-d48nw\") pod \"dns-default-9f82m\" (UID: \"74919585-4bc6-4192-8f7d-6648c409a909\") " pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.908158 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.916440 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8st6\" (UniqueName: \"kubernetes.io/projected/b58be9f2-cdec-47b7-8400-62f800b6dbe1-kube-api-access-w8st6\") pod \"catalog-operator-68c6474976-s9c88\" (UID: \"b58be9f2-cdec-47b7-8400-62f800b6dbe1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.929731 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.929888 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.429862417 +0000 UTC m=+142.913549460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.930032 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:16 crc kubenswrapper[4786]: E0127 00:08:16.930304 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.43028995 +0000 UTC m=+142.913976993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.936228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snskr\" (UniqueName: \"kubernetes.io/projected/fe040435-1925-40cd-9c29-9ed97e57516c-kube-api-access-snskr\") pod \"package-server-manager-789f6589d5-xjvbw\" (UID: \"fe040435-1925-40cd-9c29-9ed97e57516c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.937272 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.945275 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q5rqx" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.963368 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69jsh\" (UniqueName: \"kubernetes.io/projected/e411ada2-c02e-4f63-887a-e062bfacf5cd-kube-api-access-69jsh\") pod \"ingress-canary-4dzh4\" (UID: \"e411ada2-c02e-4f63-887a-e062bfacf5cd\") " pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.994978 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vbbdx"] Jan 27 00:08:16 crc kubenswrapper[4786]: I0127 00:08:16.999311 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5zhf8"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.056208 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.056737 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.556717621 +0000 UTC m=+143.040404654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.056850 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.061702 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pmhwf"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.087909 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.101695 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" event={"ID":"5c2aae55-7128-4ccc-bcff-ca7775e8035a","Type":"ContainerStarted","Data":"574083d8a3b3d608c3767277651d9545bda1833fde574addabf77da4e8ae9e08"} Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.103824 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.121809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qc8x5" event={"ID":"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb","Type":"ContainerStarted","Data":"0b4fc8ead5460836fcbc7ceea44b9706ec5d4c6e99715157cb872db0771b5ceb"} Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.123355 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" event={"ID":"d99a778d-271c-476f-a9d5-99b7e8445056","Type":"ContainerStarted","Data":"229fb14abf8c886e816447d5fbe2f7b9536e2c576113774d492ce39c70de2129"} Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.124748 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" event={"ID":"6e416439-768c-4148-a44a-2c1d155e32fb","Type":"ContainerStarted","Data":"0146d48956f5d969038b27274fbb9e376deef825a99e3e122b4c4367d93e73de"} Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.126276 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hvzq4" event={"ID":"d36adcbb-ecba-4efc-90cf-44471fcd3ce4","Type":"ContainerStarted","Data":"fd70a661e2d1190ca0e8a81cf9c75415d4eee2a6dc8fbf7436c8a0216a166ba4"} Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.157559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.158003 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.657986476 +0000 UTC m=+143.141673519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.174399 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.199084 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.216727 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4dzh4" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.258681 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.260087 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.759671513 +0000 UTC m=+143.243358556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.266172 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.270367 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.769656817 +0000 UTC m=+143.253343860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.292638 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.304356 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.307988 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzmgs"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.314267 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29491200-w2pbt" podStartSLOduration=121.314244057 podStartE2EDuration="2m1.314244057s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.296045559 +0000 UTC m=+142.779732592" watchObservedRunningTime="2026-01-27 00:08:17.314244057 +0000 UTC m=+142.797931100" Jan 27 00:08:17 crc kubenswrapper[4786]: W0127 00:08:17.333931 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6edcfe_c4c0_4c1c_9c50_6eb57938f8f9.slice/crio-c5b9372c696c982e7ac62cd69ea808626ac6ebe53466d5848a519db8e261a988 WatchSource:0}: Error finding container c5b9372c696c982e7ac62cd69ea808626ac6ebe53466d5848a519db8e261a988: Status 404 returned error can't find the container with id c5b9372c696c982e7ac62cd69ea808626ac6ebe53466d5848a519db8e261a988 Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.353501 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.375951 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.376228 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.876215173 +0000 UTC m=+143.359902216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.433364 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.442924 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pq42" podStartSLOduration=121.442910623 podStartE2EDuration="2m1.442910623s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.441290817 +0000 UTC m=+142.924977850" watchObservedRunningTime="2026-01-27 00:08:17.442910623 +0000 UTC m=+142.926597666" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.480281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.480630 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:17.980618427 +0000 UTC m=+143.464305470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: W0127 00:08:17.483762 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9317898f_297f_49d2_b0ae_811986544686.slice/crio-50bb11baf8a4c9413bda7f3deab8bbe3b8079a2c85e188da8f9b3d4b157c512d WatchSource:0}: Error finding container 50bb11baf8a4c9413bda7f3deab8bbe3b8079a2c85e188da8f9b3d4b157c512d: Status 404 returned error can't find the container with id 50bb11baf8a4c9413bda7f3deab8bbe3b8079a2c85e188da8f9b3d4b157c512d Jan 27 00:08:17 crc kubenswrapper[4786]: W0127 00:08:17.484069 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbbdab3_a219_4b1d_b29d_0ad59edbb0d5.slice/crio-bae51c6dfc970391b6a9fa2196fb36fd91f3b389ad2ba0c8a3a1408daa15157e WatchSource:0}: Error finding container bae51c6dfc970391b6a9fa2196fb36fd91f3b389ad2ba0c8a3a1408daa15157e: Status 404 returned error can't find the container with id bae51c6dfc970391b6a9fa2196fb36fd91f3b389ad2ba0c8a3a1408daa15157e Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.499651 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s2dp4" podStartSLOduration=121.499633169 podStartE2EDuration="2m1.499633169s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.490407906 +0000 UTC m=+142.974094949" watchObservedRunningTime="2026-01-27 00:08:17.499633169 +0000 UTC m=+142.983320212" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.501320 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.531435 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.531814 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.537006 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7k2vr"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.581500 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.581881 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.081859871 +0000 UTC m=+143.565546914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.592467 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" podStartSLOduration=121.592449103 podStartE2EDuration="2m1.592449103s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.554460721 +0000 UTC m=+143.038147764" watchObservedRunningTime="2026-01-27 00:08:17.592449103 +0000 UTC m=+143.076136146" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.629665 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz"] Jan 27 00:08:17 crc kubenswrapper[4786]: W0127 00:08:17.661362 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ac94253_4c9b_4dbf_83a5_e582349bbac5.slice/crio-3015ee4f19c38045ff2da75c878ec3848baa02b67192a6eea341280ca7a32689 WatchSource:0}: Error finding container 3015ee4f19c38045ff2da75c878ec3848baa02b67192a6eea341280ca7a32689: Status 404 returned error can't find the container with id 3015ee4f19c38045ff2da75c878ec3848baa02b67192a6eea341280ca7a32689 Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.682852 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.683130 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.183117185 +0000 UTC m=+143.666804218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.770213 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 00:03:16 +0000 UTC, rotation deadline is 2026-12-07 06:09:04.643719331 +0000 UTC Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.770496 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7542h0m46.873225596s for next certificate rotation Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.783495 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.783961 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.283941758 +0000 UTC m=+143.767628801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.810437 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rlsvp" podStartSLOduration=121.810420052 podStartE2EDuration="2m1.810420052s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.808788856 +0000 UTC m=+143.292475899" watchObservedRunningTime="2026-01-27 00:08:17.810420052 +0000 UTC m=+143.294107095" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.823612 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f96jt"] Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.886132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.886555 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.386541681 +0000 UTC m=+143.870228714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.955221 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4h5c" podStartSLOduration=121.955201887 podStartE2EDuration="2m1.955201887s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:17.953499948 +0000 UTC m=+143.437186991" watchObservedRunningTime="2026-01-27 00:08:17.955201887 +0000 UTC m=+143.438888940" Jan 27 00:08:17 crc kubenswrapper[4786]: I0127 00:08:17.986729 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:17 crc kubenswrapper[4786]: E0127 00:08:17.987102 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.487085105 +0000 UTC m=+143.970772148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.088688 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.089093 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.58907768 +0000 UTC m=+144.072764723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.165389 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" event={"ID":"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824","Type":"ContainerStarted","Data":"981fab8fbb0f3e5703b5448ab0b6f9017291ce4e81cfc5d438408655c4bc7fc1"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.166814 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" event={"ID":"a1a65c06-00bf-475f-bf73-b09e13502a7c","Type":"ContainerStarted","Data":"c8f22c8bb9ad793ae75c74f10718b02930e2608ef42175aa8196e102dd1259fa"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.169083 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.174114 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.190381 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" event={"ID":"496385e4-0cf2-469b-bfab-08c939c34912","Type":"ContainerStarted","Data":"b64a2d023c9f4f2e2a05ee007f12ab792f325ec7e9844efaf3c94fe3d967b5e2"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.197450 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" event={"ID":"1f067e87-dade-4a5b-b69c-cc135c2b6894","Type":"ContainerStarted","Data":"473fdde97d40c323c3b0c271094eb80f209b4f613ec60115098791f8a492c292"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.199826 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hvzq4" event={"ID":"d36adcbb-ecba-4efc-90cf-44471fcd3ce4","Type":"ContainerStarted","Data":"f8e41599e08134ed9a7c052048667a0636f3772d5398f363c01822cc8a994d6f"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.200817 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.202311 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-hvzq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 27 00:08:18 crc kubenswrapper[4786]: W0127 00:08:18.202748 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf68d0d18_9904_432a_bcf6_8791e8a2fee0.slice/crio-f18d448d48a7666d8c6986c1043435b127991156cec5afc5fba64b455af9bde7 WatchSource:0}: Error finding container f18d448d48a7666d8c6986c1043435b127991156cec5afc5fba64b455af9bde7: Status 404 returned error can't find the container 
with id f18d448d48a7666d8c6986c1043435b127991156cec5afc5fba64b455af9bde7 Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.204819 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hvzq4" podUID="d36adcbb-ecba-4efc-90cf-44471fcd3ce4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.206286 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.208750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" event={"ID":"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701","Type":"ContainerStarted","Data":"8b7fbd3ecd0f8a22aca744b0d1612ac905dad5af2422529ce7f0966998b96845"} Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.209005 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.707562495 +0000 UTC m=+144.191249538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.209135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.209788 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.709778009 +0000 UTC m=+144.193465062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.213066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q5rqx" event={"ID":"999ae037-c517-4444-8a98-29fe9b8f937d","Type":"ContainerStarted","Data":"8e3d2817aa605c58af9d2f845f3c92bf38c442e8ad3aaf650d6ce397736d425c"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.224906 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" event={"ID":"d5ebf425-82c6-42e9-acee-d09295b00d77","Type":"ContainerStarted","Data":"f7bed46e38f0fabe210bf87940467a5b488a6e287aaafc5a45a16fd7ddc87ce4"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.226667 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" event={"ID":"3ac94253-4c9b-4dbf-83a5-e582349bbac5","Type":"ContainerStarted","Data":"3015ee4f19c38045ff2da75c878ec3848baa02b67192a6eea341280ca7a32689"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.244162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" event={"ID":"9317898f-297f-49d2-b0ae-811986544686","Type":"ContainerStarted","Data":"50bb11baf8a4c9413bda7f3deab8bbe3b8079a2c85e188da8f9b3d4b157c512d"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.249958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" event={"ID":"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9","Type":"ContainerStarted","Data":"998f2e3358fd617b339f53da69f6b2569de467013a4a7f6952a4163850731126"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.250007 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" event={"ID":"ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9","Type":"ContainerStarted","Data":"c5b9372c696c982e7ac62cd69ea808626ac6ebe53466d5848a519db8e261a988"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.251863 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.254277 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tsvt8" podStartSLOduration=122.254264486 podStartE2EDuration="2m2.254264486s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.252209017 +0000 UTC m=+143.735896060" watchObservedRunningTime="2026-01-27 00:08:18.254264486 +0000 UTC m=+143.737951529" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.258039 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" 
event={"ID":"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5","Type":"ContainerStarted","Data":"bae51c6dfc970391b6a9fa2196fb36fd91f3b389ad2ba0c8a3a1408daa15157e"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.270627 4786 patch_prober.go:28] interesting pod/console-operator-58897d9998-vbbdx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.270677 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" podUID="ce6edcfe-c4c0-4c1c-9c50-6eb57938f8f9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.275225 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" event={"ID":"5c2aae55-7128-4ccc-bcff-ca7775e8035a","Type":"ContainerStarted","Data":"8a3aad85d24360a9a6b0f953544d2662088b81d374c78ffeafab11add653bba3"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.288124 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" event={"ID":"0c69ee0f-6cee-421a-a605-ec0b946a22c6","Type":"ContainerStarted","Data":"8650f3b86f5dbeb001cace8fb2a38bf2705652e7e837e45b8e7140925d32d01c"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.291080 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" event={"ID":"3b6d855b-f5f5-4405-b9f1-f4852f66e042","Type":"ContainerStarted","Data":"e4b2b3ea5e3187f59937fda478530796c345fca8a30a93fb9355da9bf4184eca"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.294146 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" event={"ID":"f1df1f90-69b0-4b9f-b262-6c569b002d6c","Type":"ContainerStarted","Data":"3847b1c5a6fe88cfbd6fc93715a4e06a760583ce5cfcd43b6897e1a81d930348"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.304372 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" event={"ID":"aa8dd718-bd04-4679-a651-816b958567b5","Type":"ContainerStarted","Data":"08737bfda500f1292ae3328ed7c97655584157219be104bff08c6b876a95e716"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.308888 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" event={"ID":"760d71b7-4d63-4653-a096-ce45c0d35505","Type":"ContainerStarted","Data":"4b53a168b21489c73e6db9ae1d34696fd5aea45e2ce644409e4cd38cb4c06492"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.311273 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.312838 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.812821734 +0000 UTC m=+144.296508767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.312941 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" event={"ID":"adbddbdc-7896-452b-aa1e-e15fae2287ba","Type":"ContainerStarted","Data":"05d349c1dbb4da1602a142a3bcccc645ca02b3cef2ed5884d76c5ddae9bbaa50"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.317629 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" event={"ID":"d99a778d-271c-476f-a9d5-99b7e8445056","Type":"ContainerStarted","Data":"24a272ef3db5d1699d8f9ae83a62e58da1f0df5897be0a6ed0eef6e9efb95f2c"} Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.385706 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fmqkz"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.410299 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.413474 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.414284 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.423660 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:18.923642961 +0000 UTC m=+144.407330004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.460522 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.506476 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.515882 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.516359 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.016343572 +0000 UTC m=+144.500030615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.549457 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gqmm8"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.565625 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.566012 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.571149 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4dzh4"] Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.577251 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" podStartSLOduration=122.577236557 podStartE2EDuration="2m2.577236557s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.542142147 +0000 UTC m=+144.025829190" watchObservedRunningTime="2026-01-27 00:08:18.577236557 +0000 UTC m=+144.060923600" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.581653 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9f82m"] 
Jan 27 00:08:18 crc kubenswrapper[4786]: W0127 00:08:18.600730 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode411ada2_c02e_4f63_887a_e062bfacf5cd.slice/crio-e982d8eb0cbfc9506e4e9367948bc048917075d4c8587fecaa8c88ef38686414 WatchSource:0}: Error finding container e982d8eb0cbfc9506e4e9367948bc048917075d4c8587fecaa8c88ef38686414: Status 404 returned error can't find the container with id e982d8eb0cbfc9506e4e9367948bc048917075d4c8587fecaa8c88ef38686414 Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.622173 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.622542 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.122530137 +0000 UTC m=+144.606217180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: W0127 00:08:18.686100 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74919585_4bc6_4192_8f7d_6648c409a909.slice/crio-bc7415b93d1c0472e2385657b3fb148d86253ff2520cf823bac60d3e06625421 WatchSource:0}: Error finding container bc7415b93d1c0472e2385657b3fb148d86253ff2520cf823bac60d3e06625421: Status 404 returned error can't find the container with id bc7415b93d1c0472e2385657b3fb148d86253ff2520cf823bac60d3e06625421 Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.725031 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.725477 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.225460389 +0000 UTC m=+144.709147432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.769288 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cvbwh" podStartSLOduration=122.769270497 podStartE2EDuration="2m2.769270497s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.727093155 +0000 UTC m=+144.210780198" watchObservedRunningTime="2026-01-27 00:08:18.769270497 +0000 UTC m=+144.252957540" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.826369 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.826827 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.326807146 +0000 UTC m=+144.810494259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.900952 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" podStartSLOduration=122.900930628 podStartE2EDuration="2m2.900930628s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.899634481 +0000 UTC m=+144.383321534" watchObservedRunningTime="2026-01-27 00:08:18.900930628 +0000 UTC m=+144.384617671" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.927298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:18 crc kubenswrapper[4786]: E0127 00:08:18.927708 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.42768963 +0000 UTC m=+144.911376673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.930830 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hvzq4" podStartSLOduration=122.930811899 podStartE2EDuration="2m2.930811899s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.929158342 +0000 UTC m=+144.412845385" watchObservedRunningTime="2026-01-27 00:08:18.930811899 +0000 UTC m=+144.414498942" Jan 27 00:08:18 crc kubenswrapper[4786]: I0127 00:08:18.977141 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r4jw2" podStartSLOduration=122.977120648 podStartE2EDuration="2m2.977120648s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:18.976007516 +0000 UTC m=+144.459694579" watchObservedRunningTime="2026-01-27 00:08:18.977120648 +0000 UTC m=+144.460807701" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.026700 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" podStartSLOduration=123.02668351 podStartE2EDuration="2m3.02668351s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.024778946 +0000 UTC m=+144.508466009" watchObservedRunningTime="2026-01-27 00:08:19.02668351 +0000 UTC m=+144.510370553" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.032894 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.033300 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.533288378 +0000 UTC m=+145.016975421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.133938 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.134357 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.634342637 +0000 UTC m=+145.118029680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.235622 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.235977 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.735962862 +0000 UTC m=+145.219649905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.328084 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" event={"ID":"4dc22252-c749-471b-9eb2-71bc3e451652","Type":"ContainerStarted","Data":"caf8465f75131c348bb64f2925e80b01f15e9200a7b1e5325f1f887e78cb6c9a"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.328144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" event={"ID":"4dc22252-c749-471b-9eb2-71bc3e451652","Type":"ContainerStarted","Data":"61f5ee97de45930749577052cf464c45747fbac65d46dd3318e20d12078843d3"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.328431 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.331905 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" event={"ID":"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701","Type":"ContainerStarted","Data":"7d5075b3e7ef959b1b0726f1cf250f68a60b750dd3ea12be55d6c03d0370f586"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.331941 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" event={"ID":"ee5b17d1-5ae6-4450-aa28-ae9c5aa6d701","Type":"ContainerStarted","Data":"ae2d9d703e46ec7c4bb45538c46d0988ada1671d5cdde300a4ab4424384f420e"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.334295 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" event={"ID":"496385e4-0cf2-469b-bfab-08c939c34912","Type":"ContainerStarted","Data":"64b5f94baf8ba0277cacec8f84745a442df1f79a5555d344b87b82e3aec30276"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.336894 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.336996 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.836981199 +0000 UTC m=+145.320668232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.337140 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.337503 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.837494144 +0000 UTC m=+145.321181187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.338801 4786 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7vzpl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.338847 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" podUID="4dc22252-c749-471b-9eb2-71bc3e451652" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.367795 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" podStartSLOduration=123.367747636 podStartE2EDuration="2m3.367747636s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.344856714 +0000 UTC m=+144.828543757" watchObservedRunningTime="2026-01-27 00:08:19.367747636 +0000 UTC m=+144.851434679" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.368035 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hvjmw" podStartSLOduration=123.368031524 podStartE2EDuration="2m3.368031524s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
00:08:19.366645125 +0000 UTC m=+144.850332168" watchObservedRunningTime="2026-01-27 00:08:19.368031524 +0000 UTC m=+144.851718567" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.375010 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" event={"ID":"f1df1f90-69b0-4b9f-b262-6c569b002d6c","Type":"ContainerStarted","Data":"94dc9e66ae80355e7a419a01768b9c485a39bcc4fcc670663768008c9150bfe3"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.375046 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" event={"ID":"f1df1f90-69b0-4b9f-b262-6c569b002d6c","Type":"ContainerStarted","Data":"e566dfbfbf9e91cdaec8e4f634c3585ac123bd18002698ed6f66f7b0b461b1ef"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.377033 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9f82m" event={"ID":"74919585-4bc6-4192-8f7d-6648c409a909","Type":"ContainerStarted","Data":"bc7415b93d1c0472e2385657b3fb148d86253ff2520cf823bac60d3e06625421"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.433509 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" event={"ID":"1f067e87-dade-4a5b-b69c-cc135c2b6894","Type":"ContainerStarted","Data":"39ee35b4681b782c07d4a2ea4b79cba13799bf5819a60414f5719c26bd438583"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.442126 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.442227 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.942210507 +0000 UTC m=+145.425897550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.448542 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.458366 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7pgkx" podStartSLOduration=123.458345377 podStartE2EDuration="2m3.458345377s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.391376999 +0000 UTC m=+144.875064052" watchObservedRunningTime="2026-01-27 00:08:19.458345377 +0000 UTC m=+144.942032420" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.462091 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" event={"ID":"3ac94253-4c9b-4dbf-83a5-e582349bbac5","Type":"ContainerStarted","Data":"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.464068 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.469821 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:19.969802253 +0000 UTC m=+145.453489296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.472499 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rgfxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.472829 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.490230 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-66gz5" podStartSLOduration=123.490209335 podStartE2EDuration="2m3.490209335s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.461849787 +0000 UTC m=+144.945536830" watchObservedRunningTime="2026-01-27 00:08:19.490209335 +0000 UTC m=+144.973896378" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.500788 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" event={"ID":"1ea5851a-23bf-437d-8378-bd23d83d1ed0","Type":"ContainerStarted","Data":"5592229d14dfd8eda517f089c44c490bdee5c285a88741e538cea26b537a24fd"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.500838 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" event={"ID":"1ea5851a-23bf-437d-8378-bd23d83d1ed0","Type":"ContainerStarted","Data":"10d9d4a4035906d91f698ba6b7b2a1c3e19a4677f851d21232ef5d55744cf512"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.505863 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" event={"ID":"b58be9f2-cdec-47b7-8400-62f800b6dbe1","Type":"ContainerStarted","Data":"e8d1947a35823171fdc32708d6a19c67fefceeea6bebc8230a6393099d475960"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.506671 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.509301 4786 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s9c88 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.509344 4786 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" podUID="b58be9f2-cdec-47b7-8400-62f800b6dbe1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.511701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q5rqx" event={"ID":"999ae037-c517-4444-8a98-29fe9b8f937d","Type":"ContainerStarted","Data":"7d93f2649042f7e88238a12ae1cda6a6cf59b79061a22ee4fc383a9b8fb8eca5"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.516831 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" event={"ID":"39b2300e-019c-4184-bab7-80bace36ad38","Type":"ContainerStarted","Data":"ab5b648ca8cc3342fc85a2cb609f922af9675ae8cc9f08100437318faed151b8"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.516882 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" event={"ID":"39b2300e-019c-4184-bab7-80bace36ad38","Type":"ContainerStarted","Data":"c539b33ca07a5ad35a6b707d124d1d58ef196ee0780187eafe97341407a5a042"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.527800 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" podStartSLOduration=123.527777655 podStartE2EDuration="2m3.527777655s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.519286523 +0000 UTC m=+145.002973566" watchObservedRunningTime="2026-01-27 00:08:19.527777655 +0000 UTC m=+145.011464698" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.528838 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" podStartSLOduration=123.528832315 podStartE2EDuration="2m3.528832315s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.488765243 +0000 UTC m=+144.972452286" watchObservedRunningTime="2026-01-27 00:08:19.528832315 +0000 UTC m=+145.012519358" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.537599 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" event={"ID":"9317898f-297f-49d2-b0ae-811986544686","Type":"ContainerStarted","Data":"4a8016a1a4524500f4ed2496d7d82628c9cbea90a26d711238cdb5db9dc6912b"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.538386 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.539112 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fkv4p" podStartSLOduration=123.539092877 podStartE2EDuration="2m3.539092877s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.537979635 +0000 UTC m=+145.021666688" watchObservedRunningTime="2026-01-27 00:08:19.539092877 +0000 UTC m=+145.022779920" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.550462 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.551077 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.051055368 +0000 UTC m=+145.534742411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.551462 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.555045 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.055027781 +0000 UTC m=+145.538714914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.562624 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7x9t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.562698 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.571211 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-q5rqx" podStartSLOduration=6.571184021 podStartE2EDuration="6.571184021s" podCreationTimestamp="2026-01-27 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.565914251 +0000 UTC m=+145.049601304" watchObservedRunningTime="2026-01-27 00:08:19.571184021 +0000 UTC m=+145.054871064" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.571639 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" event={"ID":"aa8dd718-bd04-4679-a651-816b958567b5","Type":"ContainerStarted","Data":"2bdecc35be14196fa1752db5bcd49fdcda98555ef2774037f89ca2d4aa1e3245"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.582401 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" event={"ID":"1e618028-3c51-4238-b319-f43268fb9d03","Type":"ContainerStarted","Data":"0c8e47dcc87d425485df20bf691f325e82469a98b5753250ae439bbd7d8f5374"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.596982 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" podStartSLOduration=123.596963656 podStartE2EDuration="2m3.596963656s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.596035629 +0000 UTC m=+145.079722672" watchObservedRunningTime="2026-01-27 00:08:19.596963656 +0000 UTC m=+145.080650699" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.598510 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" event={"ID":"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b","Type":"ContainerStarted","Data":"18643cbd713425866dfc76b648bb013fc4d82a623e2189993898c3576998fb65"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.603336 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" event={"ID":"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5","Type":"ContainerStarted","Data":"e2898fc3fe5f39829cb7b3a4ff8e05bcaed2c65c4ef6de9c24c5517087d98578"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.603380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" event={"ID":"dcbbdab3-a219-4b1d-b29d-0ad59edbb0d5","Type":"ContainerStarted","Data":"483e295a54af11b22e394554bfa04bb9654e5536491ac8d622d63c563b64ef0d"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.620630 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" event={"ID":"0c69ee0f-6cee-421a-a605-ec0b946a22c6","Type":"ContainerStarted","Data":"54d8faf26e7b1674021deb5a173f2b188f5714bb464f816353cb828687cdb469"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.620680 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" event={"ID":"0c69ee0f-6cee-421a-a605-ec0b946a22c6","Type":"ContainerStarted","Data":"aa5cb4bb2fef2e668a7614c748dec5cfb3b27b21fa5edbe558c1450d7b210994"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.658347 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.659458 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.159365163 +0000 UTC m=+145.643052206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.667968 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5gz28" podStartSLOduration=123.667954328 podStartE2EDuration="2m3.667954328s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.627170856 +0000 UTC m=+145.110857899" watchObservedRunningTime="2026-01-27 00:08:19.667954328 +0000 UTC m=+145.151641371" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.669613 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" podStartSLOduration=123.669603495 podStartE2EDuration="2m3.669603495s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.667796903 +0000 UTC m=+145.151483936" watchObservedRunningTime="2026-01-27 00:08:19.669603495 +0000 UTC m=+145.153290538" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.671455 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" event={"ID":"fe040435-1925-40cd-9c29-9ed97e57516c","Type":"ContainerStarted","Data":"55b316845d5577ff0279f65925727a1fd495d2dbd30b1a6c8b46338286e2c1d8"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.671481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" event={"ID":"fe040435-1925-40cd-9c29-9ed97e57516c","Type":"ContainerStarted","Data":"993156a04c775f43f87a9f77eac62904e64661531bdb74b613cdb8be56836093"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.672025 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.680649 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" event={"ID":"3b6d855b-f5f5-4405-b9f1-f4852f66e042","Type":"ContainerStarted","Data":"f1f490c350cab0bc474b1c60f724d8acb57c08bf65a4536956d77fb510a1efea"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.682606 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" event={"ID":"f68d0d18-9904-432a-bcf6-8791e8a2fee0","Type":"ContainerStarted","Data":"945c7d7bcc1b434d420a8f069cee48d07b8b6768697d103257b71c94a8a9e0f4"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.682625 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" 
event={"ID":"f68d0d18-9904-432a-bcf6-8791e8a2fee0","Type":"ContainerStarted","Data":"f18d448d48a7666d8c6986c1043435b127991156cec5afc5fba64b455af9bde7"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.683957 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" event={"ID":"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a","Type":"ContainerStarted","Data":"d5cb232de60c0d5152ac3d0c9b402d2a2d9744b53c1645727ffef29bf92577bc"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.683974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" event={"ID":"e10fe708-13fe-4f4e-8a85-7ad989a1ac6a","Type":"ContainerStarted","Data":"9810cfb5c3580f9d28f024c7a7182745d5c04d640377474ea61cfe8686990490"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.684601 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.687864 4786 generic.go:334] "Generic (PLEG): container finished" podID="a1a65c06-00bf-475f-bf73-b09e13502a7c" containerID="ab31081543e3bd55ffd2849bb977ed353b97e299ad56959dac6e821157c6908b" exitCode=0 Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.687911 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" event={"ID":"a1a65c06-00bf-475f-bf73-b09e13502a7c","Type":"ContainerDied","Data":"ab31081543e3bd55ffd2849bb977ed353b97e299ad56959dac6e821157c6908b"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.690856 4786 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hqsbf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.690895 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" podUID="e10fe708-13fe-4f4e-8a85-7ad989a1ac6a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.691776 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qc8x5" event={"ID":"f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb","Type":"ContainerStarted","Data":"fff52a7876831e42d4a7874cb17e11b088c2878a05a01a82149683c84d73a2d8"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.701733 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5zhf8" podStartSLOduration=123.70171856 podStartE2EDuration="2m3.70171856s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.701208005 +0000 UTC m=+145.184895048" watchObservedRunningTime="2026-01-27 00:08:19.70171856 +0000 UTC m=+145.185405603" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.715078 4786 generic.go:334] "Generic (PLEG): container finished" podID="760d71b7-4d63-4653-a096-ce45c0d35505" 
containerID="309f6c092ac0672b9392713c70947659def3b3dd718fc66317d6179c398689d6" exitCode=0 Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.715348 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" event={"ID":"760d71b7-4d63-4653-a096-ce45c0d35505","Type":"ContainerDied","Data":"309f6c092ac0672b9392713c70947659def3b3dd718fc66317d6179c398689d6"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.725537 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" event={"ID":"adbddbdc-7896-452b-aa1e-e15fae2287ba","Type":"ContainerStarted","Data":"4a4042465f1baa30c1eff3e4edbbdd17f3223d1142ea5ace2c656bb56f9cf8b8"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.726734 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zp9v5" podStartSLOduration=123.726721512 podStartE2EDuration="2m3.726721512s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.724718175 +0000 UTC m=+145.208405228" watchObservedRunningTime="2026-01-27 00:08:19.726721512 +0000 UTC m=+145.210408555" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.745108 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" podStartSLOduration=123.745088705 podStartE2EDuration="2m3.745088705s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.744778506 +0000 UTC m=+145.228465569" watchObservedRunningTime="2026-01-27 00:08:19.745088705 +0000 UTC m=+145.228775748" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.751438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4dzh4" event={"ID":"e411ada2-c02e-4f63-887a-e062bfacf5cd","Type":"ContainerStarted","Data":"bac9b617222ea715f73977769c689bb2ec1bbccb5803d80994769cf0bc21ad19"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.751668 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4dzh4" event={"ID":"e411ada2-c02e-4f63-887a-e062bfacf5cd","Type":"ContainerStarted","Data":"e982d8eb0cbfc9506e4e9367948bc048917075d4c8587fecaa8c88ef38686414"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.760253 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.762304 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.262292325 +0000 UTC m=+145.745979368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.782073 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" event={"ID":"ffd5c7d2-0fdb-4d75-aa79-7ff7db56b824","Type":"ContainerStarted","Data":"d03a0326cf60da73217320f6e9ff0a7ce7c0ba16c1bc012e4e27882f0f86f84c"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.791217 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.801542 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" event={"ID":"d5ebf425-82c6-42e9-acee-d09295b00d77","Type":"ContainerStarted","Data":"12ef346d7346c6799308dfbe8658de89d82c0a37107a647d2d07324a702fb63e"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.805847 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" event={"ID":"694929a3-e8eb-43f6-822b-58d265787d3e","Type":"ContainerStarted","Data":"4ce29544b03efe18d32b515db4cb61c73b76179d9d30a3f89a0496c6db3b705d"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.805890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" event={"ID":"694929a3-e8eb-43f6-822b-58d265787d3e","Type":"ContainerStarted","Data":"96bbe5685a0e3e4c0c14c7241ba45d8050666dd6118e6561650ba386483a4e4b"} Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.807803 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.808217 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-hvzq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.808248 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hvzq4" podUID="d36adcbb-ecba-4efc-90cf-44471fcd3ce4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.810099 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" podStartSLOduration=123.810079247 podStartE2EDuration="2m3.810079247s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.80141113 +0000 UTC m=+145.285098183" watchObservedRunningTime="2026-01-27 00:08:19.810079247 +0000 UTC m=+145.293766280" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.814535 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vbbdx" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.813890 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:19 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:19 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:19 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.821417 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.831933 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qc8x5" podStartSLOduration=123.831911329 podStartE2EDuration="2m3.831911329s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.826406282 +0000 UTC m=+145.310093325" watchObservedRunningTime="2026-01-27 00:08:19.831911329 +0000 UTC m=+145.315598372" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.832998 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.864229 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.865645 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.365629079 +0000 UTC m=+145.849316122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.879093 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-865v4" podStartSLOduration=123.879072552 podStartE2EDuration="2m3.879072552s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.855423218 +0000 UTC m=+145.339110261" watchObservedRunningTime="2026-01-27 00:08:19.879072552 +0000 UTC m=+145.362759585" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.914067 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" podStartSLOduration=123.914049449 podStartE2EDuration="2m3.914049449s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.880915195 +0000 UTC m=+145.364602248" watchObservedRunningTime="2026-01-27 00:08:19.914049449 +0000 UTC m=+145.397736492" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.949402 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvxx5" podStartSLOduration=123.949385145 podStartE2EDuration="2m3.949385145s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.922412337 +0000 UTC m=+145.406099380" watchObservedRunningTime="2026-01-27 00:08:19.949385145 +0000 UTC m=+145.433072188" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.949836 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" podStartSLOduration=123.949829638 podStartE2EDuration="2m3.949829638s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:19.94779121 +0000 UTC m=+145.431478253" watchObservedRunningTime="2026-01-27 00:08:19.949829638 +0000 UTC m=+145.433516681" Jan 27 00:08:19 crc kubenswrapper[4786]: I0127 00:08:19.967012 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:19 crc kubenswrapper[4786]: E0127 00:08:19.969735 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.469724254 +0000 UTC m=+145.953411297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.057527 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f96jt" podStartSLOduration=124.057509955 podStartE2EDuration="2m4.057509955s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.054777897 +0000 UTC m=+145.538464940" watchObservedRunningTime="2026-01-27 00:08:20.057509955 +0000 UTC m=+145.541196998" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.068022 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.068147 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.568128158 +0000 UTC m=+146.051815201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.068257 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.068648 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.568640152 +0000 UTC m=+146.052327195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.078659 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.078706 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.085280 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.170328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.170672 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.670655168 +0000 UTC m=+146.154342211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.170822 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.171196 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.671169613 +0000 UTC m=+146.154856656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.185372 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pmhwf" podStartSLOduration=124.185352617 podStartE2EDuration="2m4.185352617s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.183732591 +0000 UTC m=+145.667419634" watchObservedRunningTime="2026-01-27 00:08:20.185352617 +0000 UTC m=+145.669039660" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.185875 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4dzh4" podStartSLOduration=7.185870272 podStartE2EDuration="7.185870272s" podCreationTimestamp="2026-01-27 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.142952869 +0000 UTC m=+145.626639912" watchObservedRunningTime="2026-01-27 00:08:20.185870272 +0000 UTC m=+145.669557315" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.213979 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" podStartSLOduration=124.213947812 podStartE2EDuration="2m4.213947812s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.211104411 +0000 UTC m=+145.694791454" watchObservedRunningTime="2026-01-27 00:08:20.213947812 +0000 UTC m=+145.697634855" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.271872 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.272171 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.77215297 +0000 UTC m=+146.255840003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.272314 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.272931 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.772922662 +0000 UTC m=+146.256609695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.345472 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.345524 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.375028 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.376532 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.876504902 +0000 UTC m=+146.360191945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.476430 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.476730 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:20.976714916 +0000 UTC m=+146.460401959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.577464 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.577674 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.077654322 +0000 UTC m=+146.561341365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.678204 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.678472 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.178459783 +0000 UTC m=+146.662146826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.778923 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.779086 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.279061209 +0000 UTC m=+146.762748252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.779169 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.779474 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.279467781 +0000 UTC m=+146.763154824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.794069 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:20 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:20 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:20 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.794113 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.810911 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" event={"ID":"760d71b7-4d63-4653-a096-ce45c0d35505","Type":"ContainerStarted","Data":"06ce7fe4f5a4963c4a63ead6ab2d4abb9b5c6b2fa14051b584811dab741b203d"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.810953 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" event={"ID":"760d71b7-4d63-4653-a096-ce45c0d35505","Type":"ContainerStarted","Data":"8754ea14012bb7cbb9126f084a716c1a8aabae10b81bc1d2e8e963f9f111b7a2"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.813839 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" event={"ID":"b58be9f2-cdec-47b7-8400-62f800b6dbe1","Type":"ContainerStarted","Data":"04d8ebfedb6d3fdf6cfb2d057c782ca7340b39b46076b0d5dc618711dcdcdef7"} Jan 27 
00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.814719 4786 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s9c88 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.814754 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" podUID="b58be9f2-cdec-47b7-8400-62f800b6dbe1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.816206 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" event={"ID":"1e618028-3c51-4238-b319-f43268fb9d03","Type":"ContainerStarted","Data":"b48c2113aa81370414050666d8e2b9715f0d2c10cb800b2a1ddcdf3d003a105e"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.818809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9f82m" event={"ID":"74919585-4bc6-4192-8f7d-6648c409a909","Type":"ContainerStarted","Data":"ec0ee44e8ce5f2145ef531588a7a3f3bbb8ee2d347e504c325016360f64e019c"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.819040 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9f82m" event={"ID":"74919585-4bc6-4192-8f7d-6648c409a909","Type":"ContainerStarted","Data":"a4cbc5b3ca506150fdb97cc06f3c89be5959da8d36240784f3e40b6543b35dea"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.819515 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.820982 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8pgnx" event={"ID":"a321b23e-78f2-4c8b-a50a-75fdde6a4d8b","Type":"ContainerStarted","Data":"ec06f96c7f34dd6cbc55de101abdbcc2f381fa31f4218e5f4330d0ce966539e1"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.825373 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" event={"ID":"fe040435-1925-40cd-9c29-9ed97e57516c","Type":"ContainerStarted","Data":"20193af0561a1540f37f059221d12eab100d28da3cffbfd3b2b7552e8244556e"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.827459 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fmqkz" event={"ID":"694929a3-e8eb-43f6-822b-58d265787d3e","Type":"ContainerStarted","Data":"ece892fd58480831b0daa940e3f988d2f46d0eb9ae1786f89d697ae70ac10ae0"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.830118 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l42qz" event={"ID":"3b6d855b-f5f5-4405-b9f1-f4852f66e042","Type":"ContainerStarted","Data":"0ceffd2777eadc7b8c3d42669e7285bb6027fa7b90123415ccae062c6a479884"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.832896 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" 
event={"ID":"a1a65c06-00bf-475f-bf73-b09e13502a7c","Type":"ContainerStarted","Data":"1d081b8a06ec87441cdb24b63cc15df1bf1aa2914f6ba161961583ae476178f5"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.833017 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.839299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" event={"ID":"1f067e87-dade-4a5b-b69c-cc135c2b6894","Type":"ContainerStarted","Data":"7a6b0bcf2e78054fb11327065921cd588991659387392faddecfa74c1ead977d"} Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.842645 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-hvzq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.842652 4786 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7vzpl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.842693 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hvzq4" podUID="d36adcbb-ecba-4efc-90cf-44471fcd3ce4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.842702 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" podUID="4dc22252-c749-471b-9eb2-71bc3e451652" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.843380 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rgfxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" start-of-body= Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.843400 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.850562 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.851969 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ggm8j" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.856790 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" podStartSLOduration=124.856775653 
podStartE2EDuration="2m4.856775653s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.850029611 +0000 UTC m=+146.333716654" watchObservedRunningTime="2026-01-27 00:08:20.856775653 +0000 UTC m=+146.340462696" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.881278 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.881455 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.381434896 +0000 UTC m=+146.865121949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.882199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.882956 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.382939988 +0000 UTC m=+146.866627031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.886802 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" podStartSLOduration=124.886781968 podStartE2EDuration="2m4.886781968s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.883365391 +0000 UTC m=+146.367052434" watchObservedRunningTime="2026-01-27 00:08:20.886781968 +0000 UTC m=+146.370469011" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.919280 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzmgs" podStartSLOduration=124.919264323 podStartE2EDuration="2m4.919264323s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.903068912 +0000 UTC m=+146.386755955" watchObservedRunningTime="2026-01-27 00:08:20.919264323 +0000 UTC m=+146.402951366" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.968298 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9f82m" podStartSLOduration=7.96828273 podStartE2EDuration="7.96828273s" podCreationTimestamp="2026-01-27 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:20.937915995 +0000 UTC m=+146.421603038" watchObservedRunningTime="2026-01-27 00:08:20.96828273 +0000 UTC m=+146.451969773" Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.994965 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.995247 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.495230437 +0000 UTC m=+146.978917480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:20 crc kubenswrapper[4786]: I0127 00:08:20.995516 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:20 crc kubenswrapper[4786]: E0127 00:08:20.995904 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.495888416 +0000 UTC m=+146.979575459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.096760 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.096932 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.596906064 +0000 UTC m=+147.080593107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.200372 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.200882 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.700864445 +0000 UTC m=+147.184551488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.301386 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.301548 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.801523583 +0000 UTC m=+147.285210626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.301675 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.301947 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.801932094 +0000 UTC m=+147.285619137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.402237 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.402429 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.902400236 +0000 UTC m=+147.386087279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.402511 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.402791 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:21.902779387 +0000 UTC m=+147.386466430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.503063 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.503193 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.003168797 +0000 UTC m=+147.486855840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.503621 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.503907 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.003895248 +0000 UTC m=+147.487582291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.513199 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hqsbf" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.604160 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.604296 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.104270897 +0000 UTC m=+147.587957940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.604348 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.604756 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.104739771 +0000 UTC m=+147.588426814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.706030 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.706234 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.206207991 +0000 UTC m=+147.689895034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.706391 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.706724 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.206710865 +0000 UTC m=+147.690397908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.747817 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.747871 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.749525 4786 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7k2vr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.749599 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" podUID="760d71b7-4d63-4653-a096-ce45c0d35505" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.792474 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:21 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:21 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:21 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.792520 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.806932 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.807085 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.307057204 +0000 UTC m=+147.790744247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.807360 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.807679 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.307667561 +0000 UTC m=+147.791354604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.876377 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" event={"ID":"1e618028-3c51-4238-b319-f43268fb9d03","Type":"ContainerStarted","Data":"9317bc9eb1452503f374a3282869fc064ac4aba515bc7d2213d6926b59214239"} Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.876720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" event={"ID":"1e618028-3c51-4238-b319-f43268fb9d03","Type":"ContainerStarted","Data":"aa1c03bb046799ed17c0f8ff45d2bf0d21f2af9e170aaccdcfb601c3e668415e"} Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.880198 4786 generic.go:334] "Generic (PLEG): container finished" podID="1ea5851a-23bf-437d-8378-bd23d83d1ed0" containerID="5592229d14dfd8eda517f089c44c490bdee5c285a88741e538cea26b537a24fd" exitCode=0 Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.880236 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" event={"ID":"1ea5851a-23bf-437d-8378-bd23d83d1ed0","Type":"ContainerDied","Data":"5592229d14dfd8eda517f089c44c490bdee5c285a88741e538cea26b537a24fd"} Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.886064 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9c88" Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.911416 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.911508 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.411487609 +0000 UTC m=+147.895174662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:21 crc kubenswrapper[4786]: I0127 00:08:21.912475 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:21 crc kubenswrapper[4786]: E0127 00:08:21.913077 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.413067804 +0000 UTC m=+147.896754847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.016426 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.016647 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.516621044 +0000 UTC m=+148.000308087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.117750 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.118152 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.618131786 +0000 UTC m=+148.101818829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.219195 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.219536 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.719511414 +0000 UTC m=+148.203198457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.219862 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.220200 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.720185153 +0000 UTC m=+148.203872196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.267256 4786 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.291270 4786 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T00:08:22.267284495Z","Handler":null,"Name":""} Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.320607 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.321013 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.820983724 +0000 UTC m=+148.304670767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.321336 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: E0127 00:08:22.321666 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 00:08:22.821658724 +0000 UTC m=+148.305345757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc7zp" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.341263 4786 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.341313 4786 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.422390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.437988 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.523837 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.539739 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.539777 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.580778 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.581645 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.594358 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc7zp\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.598131 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.606691 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.624536 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.624650 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftcz\" (UniqueName: \"kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.624673 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.725166 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wftcz\" (UniqueName: \"kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.725205 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.725253 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.725662 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.726071 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.742908 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wftcz\" (UniqueName: \"kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz\") pod \"certified-operators-sgprl\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.763533 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.783529 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.785042 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.792358 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:22 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:22 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:22 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.792397 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.792825 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.801415 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.844987 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdz5s" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.892523 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" event={"ID":"1e618028-3c51-4238-b319-f43268fb9d03","Type":"ContainerStarted","Data":"b069549f37016a1329e9375c95675e4dfc4207df0e21dafa5c2f03ffe6dc10b9"} Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.903821 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.922497 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gqmm8" podStartSLOduration=9.922479389 podStartE2EDuration="9.922479389s" podCreationTimestamp="2026-01-27 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:22.917282981 +0000 UTC m=+148.400970024" watchObservedRunningTime="2026-01-27 00:08:22.922479389 +0000 UTC m=+148.406166432" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.933227 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.933311 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:22 crc kubenswrapper[4786]: I0127 00:08:22.933365 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k9wl\" (UniqueName: \"kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.015774 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.016983 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.023826 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.034396 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k9wl\" (UniqueName: \"kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.034544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.034684 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.036483 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.038061 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.061880 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k9wl\" (UniqueName: \"kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl\") pod \"community-operators-npglr\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.137000 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2wzf\" (UniqueName: \"kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.137047 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.137107 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.139507 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.170973 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.176101 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.177505 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.188454 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238105 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238330 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238433 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2wzf\" (UniqueName: \"kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238614 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238744 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.238838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.239506 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.239973 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.243121 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.244676 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.244948 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.247287 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.259168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2wzf\" (UniqueName: \"kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf\") pod \"certified-operators-ml494\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " 
pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.264918 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.277918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.350208 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.350596 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8b29\" (UniqueName: \"kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.350643 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.357962 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.369420 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.371157 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452290 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume\") pod \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452369 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm799\" (UniqueName: \"kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799\") pod \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452435 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume\") pod \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\" (UID: \"1ea5851a-23bf-437d-8378-bd23d83d1ed0\") " Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452582 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8b29\" (UniqueName: \"kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452626 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.452686 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.453099 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.453208 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume" (OuterVolumeSpecName: "config-volume") pod "1ea5851a-23bf-437d-8378-bd23d83d1ed0" (UID: "1ea5851a-23bf-437d-8378-bd23d83d1ed0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.453518 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.469673 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.480291 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.482378 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1ea5851a-23bf-437d-8378-bd23d83d1ed0" (UID: "1ea5851a-23bf-437d-8378-bd23d83d1ed0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.487805 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8b29\" (UniqueName: \"kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29\") pod \"community-operators-5wmtm\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.496704 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799" (OuterVolumeSpecName: "kube-api-access-tm799") pod "1ea5851a-23bf-437d-8378-bd23d83d1ed0" (UID: "1ea5851a-23bf-437d-8378-bd23d83d1ed0"). InnerVolumeSpecName "kube-api-access-tm799". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.505294 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.554060 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm799\" (UniqueName: \"kubernetes.io/projected/1ea5851a-23bf-437d-8378-bd23d83d1ed0-kube-api-access-tm799\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.554082 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ea5851a-23bf-437d-8378-bd23d83d1ed0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.554090 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ea5851a-23bf-437d-8378-bd23d83d1ed0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.741810 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:08:23 crc kubenswrapper[4786]: W0127 00:08:23.793616 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fe5646_e88c_4b2e_9808_6550f7d9947c.slice/crio-b759aabebe80cae39165fb66ba8b483a4c87c4dc9700df2b56d47cd33eacf256 WatchSource:0}: Error finding container b759aabebe80cae39165fb66ba8b483a4c87c4dc9700df2b56d47cd33eacf256: Status 404 returned error can't find the container with id b759aabebe80cae39165fb66ba8b483a4c87c4dc9700df2b56d47cd33eacf256 Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.796951 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:23 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:23 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:23 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.797002 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.881250 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.918404 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" event={"ID":"1ea5851a-23bf-437d-8378-bd23d83d1ed0","Type":"ContainerDied","Data":"10d9d4a4035906d91f698ba6b7b2a1c3e19a4677f851d21232ef5d55744cf512"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.918441 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d9d4a4035906d91f698ba6b7b2a1c3e19a4677f851d21232ef5d55744cf512" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.918497 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491200-glqtq" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.946283 4786 generic.go:334] "Generic (PLEG): container finished" podID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerID="b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8" exitCode=0 Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.946358 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerDied","Data":"b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.946387 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerStarted","Data":"5d398bf3101daf570310357e9a1fc68d0e71b0ddbea69bcd05bad7fefd83e127"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.949065 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.963356 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d9083b45f8b8e213c648b24e47774ac6cba92dd8b857ba1c8671a5b8021b9414"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.982432 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" event={"ID":"c99c57e7-f694-4e94-aaf4-cefa5df36513","Type":"ContainerStarted","Data":"f6f634a3ce61db9d459be266de8c680acbdf90882639f99b4fcca7940d99b3fe"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.982492 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" event={"ID":"c99c57e7-f694-4e94-aaf4-cefa5df36513","Type":"ContainerStarted","Data":"ff6865fe4438724d1f1a5b4926a61575ba63451db89b610b3bbbb91d341f5293"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.983032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.991287 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerStarted","Data":"b759aabebe80cae39165fb66ba8b483a4c87c4dc9700df2b56d47cd33eacf256"} Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.993769 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:08:23 crc kubenswrapper[4786]: I0127 00:08:23.995875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d2752c27014e5eed6e95394c294324831948fdfb6bc4cc3941767461a5abbfc2"} Jan 27 00:08:24 crc kubenswrapper[4786]: W0127 00:08:24.010270 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd7c9ae_7333_4ef6_adbb_4e7037046a0f.slice/crio-65e1921f9d300e8b0a0d8a5bcb9b801c83fb9d56611d2c455d2cbe4a0c7537ae WatchSource:0}: Error finding 
container 65e1921f9d300e8b0a0d8a5bcb9b801c83fb9d56611d2c455d2cbe4a0c7537ae: Status 404 returned error can't find the container with id 65e1921f9d300e8b0a0d8a5bcb9b801c83fb9d56611d2c455d2cbe4a0c7537ae Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.012781 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" podStartSLOduration=128.012762797 podStartE2EDuration="2m8.012762797s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:24.010494213 +0000 UTC m=+149.494181256" watchObservedRunningTime="2026-01-27 00:08:24.012762797 +0000 UTC m=+149.496449840" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.371624 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 00:08:24 crc kubenswrapper[4786]: E0127 00:08:24.372334 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5851a-23bf-437d-8378-bd23d83d1ed0" containerName="collect-profiles" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.372366 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5851a-23bf-437d-8378-bd23d83d1ed0" containerName="collect-profiles" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.372616 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea5851a-23bf-437d-8378-bd23d83d1ed0" containerName="collect-profiles" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.373404 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.375838 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.376254 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.377063 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.567591 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.567674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.580826 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.581989 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.584278 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.599216 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.669278 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.669363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.669389 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.711897 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.714090 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.770735 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.770835 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.770905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjr7f\" (UniqueName: \"kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.796522 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:24 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:24 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:24 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.796623 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.872501 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjr7f\" (UniqueName: \"kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.872613 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.872674 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.873444 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.873549 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.899621 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjr7f\" (UniqueName: \"kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f\") pod \"redhat-marketplace-wgj2x\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.911052 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.979088 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.980349 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:24 crc kubenswrapper[4786]: I0127 00:08:24.992375 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.009989 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.012057 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.012112 4786 patch_prober.go:28] interesting pod/console-f9d7485db-tsvt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.012148 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tsvt8" podUID="e0b3fdf9-0ced-429c-8169-9cf860c9e46e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.015755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"776023ce2dd96cd0e65da53cd1c590a4547b7653cc261dcd5af62bcbfd66157b"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.019281 4786 generic.go:334] "Generic (PLEG): container finished" podID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerID="ed971c17d5718f3d8fdf2836a127e7b1bdde45ac024dcfeb2e9df1970ab55db2" exitCode=0 Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.019337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" 
event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerDied","Data":"ed971c17d5718f3d8fdf2836a127e7b1bdde45ac024dcfeb2e9df1970ab55db2"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.019354 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerStarted","Data":"65e1921f9d300e8b0a0d8a5bcb9b801c83fb9d56611d2c455d2cbe4a0c7537ae"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.025266 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7ef02b2f4c05f96fde120076b71e8f427a4ecc28c21af44865f6cfbb16f7dafc"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.031298 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b88c782d90530c8206cc8937fcafd477f0b8ab4fe56c8cb3e22473d47ed1af2e"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.031344 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6cc0dddfc97ede15502bfaa9deba9c681f158ed2687346f95a816addad7db00e"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.031919 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.044693 4786 generic.go:334] "Generic (PLEG): container finished" podID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerID="c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c" exitCode=0 Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.044988 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerDied","Data":"c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.051808 4786 generic.go:334] "Generic (PLEG): container finished" podID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerID="64cf09dcbcbe39bcbfb0566353cc9169dc9d8d2dfb0a51cfc1f6bc55ed82f995" exitCode=0 Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.052866 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerDied","Data":"64cf09dcbcbe39bcbfb0566353cc9169dc9d8d2dfb0a51cfc1f6bc55ed82f995"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.053014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerStarted","Data":"fe23f8c5e07ae0c33de8ca3fd2c2d90b79f3b07d0166af214c2d899dbffc7171"} Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.179791 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjwd\" (UniqueName: \"kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " 
pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.179902 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.179975 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.193799 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.280950 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.281364 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjwd\" (UniqueName: \"kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.281606 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.281907 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.282040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.296406 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjwd\" (UniqueName: \"kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd\") pod \"redhat-marketplace-c76nj\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.352985 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.384178 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:08:25 crc kubenswrapper[4786]: W0127 00:08:25.396451 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6930398b_226e_4cc6_8fbe_5ff39cbe5bab.slice/crio-021db0784f6a4a40fb291f058c397acaf526785afc67e549d1ff4e87fb807b05 WatchSource:0}: Error finding container 021db0784f6a4a40fb291f058c397acaf526785afc67e549d1ff4e87fb807b05: Status 404 returned error can't find the container with id 021db0784f6a4a40fb291f058c397acaf526785afc67e549d1ff4e87fb807b05 Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.595105 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.777090 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.778340 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.781109 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.785404 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.800510 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:25 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:25 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:25 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.800591 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.892034 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.892188 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.892237 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvzv\" (UniqueName: 
\"kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.993996 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.994047 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbvzv\" (UniqueName: \"kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.994131 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.994691 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:25 crc kubenswrapper[4786]: I0127 00:08:25.994958 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.032864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbvzv\" (UniqueName: \"kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv\") pod \"redhat-operators-4nmjh\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.098052 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.147601 4786 generic.go:334] "Generic (PLEG): container finished" podID="647f6a23-ad88-4898-93fb-c19880c9d204" containerID="7adc04610b915e8b3d7098fbceb3c58513ca36cd030222fd88d4c1be69a5e433" exitCode=0 Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.147767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerDied","Data":"7adc04610b915e8b3d7098fbceb3c58513ca36cd030222fd88d4c1be69a5e433"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.147844 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerStarted","Data":"0af4d7315ee12bbf653ea44fd3a01a0b6c4120c075698bf4359836ccc756bd38"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.150104 4786 generic.go:334] "Generic (PLEG): container finished" podID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerID="30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b" exitCode=0 Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.150173 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerDied","Data":"30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.150207 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerStarted","Data":"021db0784f6a4a40fb291f058c397acaf526785afc67e549d1ff4e87fb807b05"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.152222 4786 generic.go:334] "Generic (PLEG): container finished" podID="fd01eda3-4f09-4126-8798-1ac742f7f6a7" containerID="03cb53a9b46072714cedfa60b48c0fe410794796648e018a9908637a0ded223e" exitCode=0 Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.152312 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd01eda3-4f09-4126-8798-1ac742f7f6a7","Type":"ContainerDied","Data":"03cb53a9b46072714cedfa60b48c0fe410794796648e018a9908637a0ded223e"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.152338 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd01eda3-4f09-4126-8798-1ac742f7f6a7","Type":"ContainerStarted","Data":"9e2660a568d9cb54be5be6f4545a4a1c29434dde58ef5d1a5c84d6adaad200e2"} Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.196229 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.198642 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.202561 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.297078 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.297115 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lm5\" (UniqueName: \"kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.297159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.351157 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-hvzq4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.351216 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-hvzq4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.351225 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hvzq4" podUID="d36adcbb-ecba-4efc-90cf-44471fcd3ce4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.351277 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hvzq4" podUID="d36adcbb-ecba-4efc-90cf-44471fcd3ce4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.398210 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.398340 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities\") pod \"redhat-operators-rkp7f\" (UID: 
\"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.398370 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5lm5\" (UniqueName: \"kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.399210 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.399294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.419912 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lm5\" (UniqueName: \"kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5\") pod \"redhat-operators-rkp7f\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.483216 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:08:26 crc kubenswrapper[4786]: W0127 00:08:26.501920 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a49023_4d08_4aa5_9e39_c8c0aad82dbf.slice/crio-92b7996bb920075896dd8f113b4eb4f9fffa02613b61d56229753aa18a7b3d3d WatchSource:0}: Error finding container 92b7996bb920075896dd8f113b4eb4f9fffa02613b61d56229753aa18a7b3d3d: Status 404 returned error can't find the container with id 92b7996bb920075896dd8f113b4eb4f9fffa02613b61d56229753aa18a7b3d3d Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.561982 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.591398 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.755131 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.759779 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7k2vr" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.792494 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.797026 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:26 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:26 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:26 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.797089 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.862404 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vzpl" Jan 27 00:08:26 crc kubenswrapper[4786]: I0127 00:08:26.905144 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.171118 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerStarted","Data":"9ba2016a224c81fe5ed8b5a8bb3e8edde21a721d4c5a92bed7da55acf78aa3e4"} Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.177328 4786 generic.go:334] "Generic (PLEG): container finished" podID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerID="64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d" exitCode=0 Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.178353 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerDied","Data":"64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d"} Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.178385 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerStarted","Data":"92b7996bb920075896dd8f113b4eb4f9fffa02613b61d56229753aa18a7b3d3d"} Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.434320 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.520365 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir\") pod \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.520458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access\") pod \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\" (UID: \"fd01eda3-4f09-4126-8798-1ac742f7f6a7\") " Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.520475 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fd01eda3-4f09-4126-8798-1ac742f7f6a7" (UID: "fd01eda3-4f09-4126-8798-1ac742f7f6a7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.520784 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.525781 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fd01eda3-4f09-4126-8798-1ac742f7f6a7" (UID: "fd01eda3-4f09-4126-8798-1ac742f7f6a7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.622163 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd01eda3-4f09-4126-8798-1ac742f7f6a7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.752656 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 00:08:27 crc kubenswrapper[4786]: E0127 00:08:27.753015 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd01eda3-4f09-4126-8798-1ac742f7f6a7" containerName="pruner" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.753039 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd01eda3-4f09-4126-8798-1ac742f7f6a7" containerName="pruner" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.753163 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd01eda3-4f09-4126-8798-1ac742f7f6a7" containerName="pruner" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.753637 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.754168 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.756272 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.756487 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.792665 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:27 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 00:08:27 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:27 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.792741 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.824608 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.824683 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.926471 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.926543 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.926660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:27 crc kubenswrapper[4786]: I0127 00:08:27.948650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.070314 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.213851 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd01eda3-4f09-4126-8798-1ac742f7f6a7","Type":"ContainerDied","Data":"9e2660a568d9cb54be5be6f4545a4a1c29434dde58ef5d1a5c84d6adaad200e2"} Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.213889 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2660a568d9cb54be5be6f4545a4a1c29434dde58ef5d1a5c84d6adaad200e2" Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.213935 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.229321 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerID="16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc" exitCode=0 Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.229633 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerDied","Data":"16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc"} Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.377907 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 00:08:28 crc kubenswrapper[4786]: W0127 00:08:28.418354 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode6b037cb_0a89_48e5_bce8_2b6a83d359d1.slice/crio-605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f WatchSource:0}: Error finding container 605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f: Status 404 returned error can't find the container with id 605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.800597 4786 patch_prober.go:28] interesting pod/router-default-5444994796-qc8x5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 00:08:28 crc kubenswrapper[4786]: [+]has-synced ok Jan 27 00:08:28 crc kubenswrapper[4786]: [+]process-running ok Jan 27 00:08:28 crc kubenswrapper[4786]: healthz check failed Jan 27 00:08:28 crc kubenswrapper[4786]: I0127 00:08:28.800918 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qc8x5" podUID="f57635c5-4cbd-4ffc-9dc9-c9b24cb879eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 00:08:29 crc kubenswrapper[4786]: I0127 00:08:29.268200 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"e6b037cb-0a89-48e5-bce8-2b6a83d359d1","Type":"ContainerStarted","Data":"d9c339b4e5bae34b84ca85d1630a8264e7d0062cf2fc41bc4fe6a58b6ab39e3d"} Jan 27 00:08:29 crc kubenswrapper[4786]: I0127 00:08:29.268245 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6b037cb-0a89-48e5-bce8-2b6a83d359d1","Type":"ContainerStarted","Data":"605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f"} Jan 27 00:08:29 crc kubenswrapper[4786]: I0127 00:08:29.792945 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:29 crc kubenswrapper[4786]: I0127 00:08:29.797492 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qc8x5" Jan 27 00:08:30 crc kubenswrapper[4786]: I0127 00:08:30.287024 4786 generic.go:334] "Generic (PLEG): container finished" podID="e6b037cb-0a89-48e5-bce8-2b6a83d359d1" containerID="d9c339b4e5bae34b84ca85d1630a8264e7d0062cf2fc41bc4fe6a58b6ab39e3d" exitCode=0 Jan 27 00:08:30 crc kubenswrapper[4786]: I0127 00:08:30.287156 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6b037cb-0a89-48e5-bce8-2b6a83d359d1","Type":"ContainerDied","Data":"d9c339b4e5bae34b84ca85d1630a8264e7d0062cf2fc41bc4fe6a58b6ab39e3d"} Jan 27 00:08:31 crc kubenswrapper[4786]: I0127 00:08:31.911543 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9f82m" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.010378 4786 patch_prober.go:28] interesting pod/console-f9d7485db-tsvt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.010876 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tsvt8" podUID="e0b3fdf9-0ced-429c-8169-9cf860c9e46e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.653813 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.766184 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir\") pod \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.766264 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6b037cb-0a89-48e5-bce8-2b6a83d359d1" (UID: "e6b037cb-0a89-48e5-bce8-2b6a83d359d1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.766462 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access\") pod \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\" (UID: \"e6b037cb-0a89-48e5-bce8-2b6a83d359d1\") " Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.766732 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.783490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6b037cb-0a89-48e5-bce8-2b6a83d359d1" (UID: "e6b037cb-0a89-48e5-bce8-2b6a83d359d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:35 crc kubenswrapper[4786]: I0127 00:08:35.868089 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6b037cb-0a89-48e5-bce8-2b6a83d359d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:36 crc kubenswrapper[4786]: I0127 00:08:36.330693 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6b037cb-0a89-48e5-bce8-2b6a83d359d1","Type":"ContainerDied","Data":"605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f"} Jan 27 00:08:36 crc kubenswrapper[4786]: I0127 00:08:36.330742 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605a55f9e25909d534cfe7cfc95264b4c3011c99865ccc38f42f99035d92331f" Jan 27 00:08:36 crc kubenswrapper[4786]: I0127 00:08:36.330842 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 00:08:36 crc kubenswrapper[4786]: I0127 00:08:36.357503 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hvzq4" Jan 27 00:08:36 crc kubenswrapper[4786]: E0127 00:08:36.366186 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pode6b037cb_0a89_48e5_bce8_2b6a83d359d1.slice\": RecentStats: unable to find data in memory cache]" Jan 27 00:08:37 crc kubenswrapper[4786]: I0127 00:08:37.654883 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:37 crc kubenswrapper[4786]: I0127 00:08:37.656848 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" containerID="cri-o://4a8016a1a4524500f4ed2496d7d82628c9cbea90a26d711238cdb5db9dc6912b" gracePeriod=30 Jan 27 00:08:37 crc kubenswrapper[4786]: I0127 00:08:37.663295 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:37 crc kubenswrapper[4786]: I0127 00:08:37.664565 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" podUID="415679b9-bb9e-4931-8260-103aa2e42237" containerName="route-controller-manager" containerID="cri-o://6ca4889e4895ee1cd8e258a50f8edbe177c3cf6164014ad14ff267919e943ee8" gracePeriod=30 Jan 27 00:08:38 crc kubenswrapper[4786]: I0127 00:08:38.344184 4786 generic.go:334] "Generic (PLEG): container finished" podID="415679b9-bb9e-4931-8260-103aa2e42237" containerID="6ca4889e4895ee1cd8e258a50f8edbe177c3cf6164014ad14ff267919e943ee8" exitCode=0 Jan 27 00:08:38 crc kubenswrapper[4786]: I0127 00:08:38.344256 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" event={"ID":"415679b9-bb9e-4931-8260-103aa2e42237","Type":"ContainerDied","Data":"6ca4889e4895ee1cd8e258a50f8edbe177c3cf6164014ad14ff267919e943ee8"} Jan 27 00:08:38 crc kubenswrapper[4786]: I0127 00:08:38.345602 4786 generic.go:334] "Generic (PLEG): container finished" podID="9317898f-297f-49d2-b0ae-811986544686" containerID="4a8016a1a4524500f4ed2496d7d82628c9cbea90a26d711238cdb5db9dc6912b" exitCode=0 Jan 27 00:08:38 crc kubenswrapper[4786]: I0127 00:08:38.345627 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" event={"ID":"9317898f-297f-49d2-b0ae-811986544686","Type":"ContainerDied","Data":"4a8016a1a4524500f4ed2496d7d82628c9cbea90a26d711238cdb5db9dc6912b"} Jan 27 00:08:39 crc kubenswrapper[4786]: I0127 00:08:39.110801 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:39 crc kubenswrapper[4786]: I0127 00:08:39.122909 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/be80aa92-329a-4f72-9dbb-b717f533fffb-metrics-certs\") pod \"network-metrics-daemon-9czjg\" (UID: \"be80aa92-329a-4f72-9dbb-b717f533fffb\") " pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:39 crc kubenswrapper[4786]: I0127 00:08:39.170894 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9czjg" Jan 27 00:08:42 crc kubenswrapper[4786]: I0127 00:08:42.771153 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:08:45 crc kubenswrapper[4786]: I0127 00:08:45.013429 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:45 crc kubenswrapper[4786]: I0127 00:08:45.017974 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tsvt8" Jan 27 00:08:45 crc kubenswrapper[4786]: I0127 00:08:45.723959 4786 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-46v4c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 00:08:45 crc kubenswrapper[4786]: I0127 00:08:45.724224 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" podUID="415679b9-bb9e-4931-8260-103aa2e42237" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 00:08:46 crc kubenswrapper[4786]: I0127 00:08:46.450771 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7x9t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 00:08:46 crc kubenswrapper[4786]: I0127 00:08:46.450843 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 00:08:47 crc kubenswrapper[4786]: I0127 00:08:47.411705 4786 generic.go:334] "Generic (PLEG): container finished" podID="0200bfc5-4623-473a-822f-34fe28080b1c" containerID="12cb81b0bbf952c697c8bac09f42d09c205e59f6e329d9dd0c908ac727ce8a45" exitCode=0 Jan 27 00:08:47 crc kubenswrapper[4786]: I0127 00:08:47.412070 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29491200-w2pbt" event={"ID":"0200bfc5-4623-473a-822f-34fe28080b1c","Type":"ContainerDied","Data":"12cb81b0bbf952c697c8bac09f42d09c205e59f6e329d9dd0c908ac727ce8a45"} Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.210130 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.249275 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:50 crc kubenswrapper[4786]: E0127 00:08:50.250685 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b037cb-0a89-48e5-bce8-2b6a83d359d1" containerName="pruner" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.251932 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b037cb-0a89-48e5-bce8-2b6a83d359d1" containerName="pruner" Jan 27 00:08:50 crc kubenswrapper[4786]: E0127 00:08:50.251955 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415679b9-bb9e-4931-8260-103aa2e42237" containerName="route-controller-manager" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.251962 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="415679b9-bb9e-4931-8260-103aa2e42237" containerName="route-controller-manager" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.252064 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b037cb-0a89-48e5-bce8-2b6a83d359d1" containerName="pruner" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.252077 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="415679b9-bb9e-4931-8260-103aa2e42237" containerName="route-controller-manager" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.252424 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.258926 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.344868 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.345279 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397183 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca\") pod \"415679b9-bb9e-4931-8260-103aa2e42237\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397231 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config\") pod \"415679b9-bb9e-4931-8260-103aa2e42237\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397296 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5l2m\" (UniqueName: 
\"kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m\") pod \"415679b9-bb9e-4931-8260-103aa2e42237\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397360 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert\") pod \"415679b9-bb9e-4931-8260-103aa2e42237\" (UID: \"415679b9-bb9e-4931-8260-103aa2e42237\") " Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397618 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397677 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397910 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxsr\" (UniqueName: \"kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.397984 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.398005 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca" (OuterVolumeSpecName: "client-ca") pod "415679b9-bb9e-4931-8260-103aa2e42237" (UID: "415679b9-bb9e-4931-8260-103aa2e42237"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.398036 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config" (OuterVolumeSpecName: "config") pod "415679b9-bb9e-4931-8260-103aa2e42237" (UID: "415679b9-bb9e-4931-8260-103aa2e42237"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.405983 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "415679b9-bb9e-4931-8260-103aa2e42237" (UID: "415679b9-bb9e-4931-8260-103aa2e42237"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.406318 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m" (OuterVolumeSpecName: "kube-api-access-t5l2m") pod "415679b9-bb9e-4931-8260-103aa2e42237" (UID: "415679b9-bb9e-4931-8260-103aa2e42237"). InnerVolumeSpecName "kube-api-access-t5l2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.430375 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" event={"ID":"415679b9-bb9e-4931-8260-103aa2e42237","Type":"ContainerDied","Data":"8bd4a403d08e016fb963bbf86f2a4d41dd937aa38c9322e16470d3b4c82a109a"} Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.430402 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.430605 4786 scope.go:117] "RemoveContainer" containerID="6ca4889e4895ee1cd8e258a50f8edbe177c3cf6164014ad14ff267919e943ee8" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.467091 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.470311 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-46v4c"] Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499605 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxsr\" (UniqueName: \"kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499685 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499747 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499823 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499889 4786 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499911 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415679b9-bb9e-4931-8260-103aa2e42237-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499923 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5l2m\" (UniqueName: \"kubernetes.io/projected/415679b9-bb9e-4931-8260-103aa2e42237-kube-api-access-t5l2m\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.499937 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415679b9-bb9e-4931-8260-103aa2e42237-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.500918 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.501859 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.504300 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.517642 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxsr\" (UniqueName: \"kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr\") pod \"route-controller-manager-8576df8657-t5wdw\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:50 crc kubenswrapper[4786]: I0127 00:08:50.570245 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:51 crc kubenswrapper[4786]: I0127 00:08:51.158026 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415679b9-bb9e-4931-8260-103aa2e42237" path="/var/lib/kubelet/pods/415679b9-bb9e-4931-8260-103aa2e42237/volumes" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.436962 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.446809 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.449162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" event={"ID":"9317898f-297f-49d2-b0ae-811986544686","Type":"ContainerDied","Data":"50bb11baf8a4c9413bda7f3deab8bbe3b8079a2c85e188da8f9b3d4b157c512d"} Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.449226 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7x9t" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.451514 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29491200-w2pbt" event={"ID":"0200bfc5-4623-473a-822f-34fe28080b1c","Type":"ContainerDied","Data":"2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57"} Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.451549 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2418910e6f407a37c26ebcc83adbfa2a878f8f4af7e889b3fba46d37a2ccce57" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.451608 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29491200-w2pbt" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.471450 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:08:53 crc kubenswrapper[4786]: E0127 00:08:53.481838 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.481855 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" Jan 27 00:08:53 crc kubenswrapper[4786]: E0127 00:08:53.481867 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0200bfc5-4623-473a-822f-34fe28080b1c" containerName="image-pruner" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.481873 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0200bfc5-4623-473a-822f-34fe28080b1c" containerName="image-pruner" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.482309 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="0200bfc5-4623-473a-822f-34fe28080b1c" containerName="image-pruner" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.482322 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9317898f-297f-49d2-b0ae-811986544686" containerName="controller-manager" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.482682 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.482753 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.499799 4786 scope.go:117] "RemoveContainer" containerID="4a8016a1a4524500f4ed2496d7d82628c9cbea90a26d711238cdb5db9dc6912b" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.563928 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca\") pod \"9317898f-297f-49d2-b0ae-811986544686\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564010 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca\") pod \"0200bfc5-4623-473a-822f-34fe28080b1c\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564041 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmbz\" (UniqueName: \"kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz\") pod \"0200bfc5-4623-473a-822f-34fe28080b1c\" (UID: \"0200bfc5-4623-473a-822f-34fe28080b1c\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert\") pod \"9317898f-297f-49d2-b0ae-811986544686\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564126 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrqzq\" (UniqueName: \"kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq\") pod \"9317898f-297f-49d2-b0ae-811986544686\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564170 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles\") pod \"9317898f-297f-49d2-b0ae-811986544686\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.564246 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config\") pod \"9317898f-297f-49d2-b0ae-811986544686\" (UID: \"9317898f-297f-49d2-b0ae-811986544686\") " Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.565184 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca" (OuterVolumeSpecName: "serviceca") pod "0200bfc5-4623-473a-822f-34fe28080b1c" (UID: "0200bfc5-4623-473a-822f-34fe28080b1c"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.565440 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config" (OuterVolumeSpecName: "config") pod "9317898f-297f-49d2-b0ae-811986544686" (UID: "9317898f-297f-49d2-b0ae-811986544686"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.565457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9317898f-297f-49d2-b0ae-811986544686" (UID: "9317898f-297f-49d2-b0ae-811986544686"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.565993 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca" (OuterVolumeSpecName: "client-ca") pod "9317898f-297f-49d2-b0ae-811986544686" (UID: "9317898f-297f-49d2-b0ae-811986544686"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.571089 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz" (OuterVolumeSpecName: "kube-api-access-vjmbz") pod "0200bfc5-4623-473a-822f-34fe28080b1c" (UID: "0200bfc5-4623-473a-822f-34fe28080b1c"). InnerVolumeSpecName "kube-api-access-vjmbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.571848 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9317898f-297f-49d2-b0ae-811986544686" (UID: "9317898f-297f-49d2-b0ae-811986544686"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.587884 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq" (OuterVolumeSpecName: "kube-api-access-xrqzq") pod "9317898f-297f-49d2-b0ae-811986544686" (UID: "9317898f-297f-49d2-b0ae-811986544686"). InnerVolumeSpecName "kube-api-access-xrqzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665178 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665441 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665490 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9lw\" (UniqueName: \"kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665532 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665559 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665623 4786 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0200bfc5-4623-473a-822f-34fe28080b1c-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665634 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmbz\" (UniqueName: \"kubernetes.io/projected/0200bfc5-4623-473a-822f-34fe28080b1c-kube-api-access-vjmbz\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665642 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9317898f-297f-49d2-b0ae-811986544686-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665651 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrqzq\" (UniqueName: \"kubernetes.io/projected/9317898f-297f-49d2-b0ae-811986544686-kube-api-access-xrqzq\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665696 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665802 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.665841 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9317898f-297f-49d2-b0ae-811986544686-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.698727 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9czjg"] Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.764440 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.767050 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.767117 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.767170 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9lw\" (UniqueName: \"kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.767232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.767282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.769037 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 
00:08:53.770377 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.773299 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.782991 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.793279 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9lw\" (UniqueName: \"kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw\") pod \"controller-manager-58c776f899-5vc22\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.801148 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:53 crc kubenswrapper[4786]: W0127 00:08:53.808754 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c741126_0e92_4984_916d_3ed5bd525b80.slice/crio-1130a6be1be973fa5003b71345a3c235e65e7e39b7c7b3651a52c66a7c10cb3b WatchSource:0}: Error finding container 1130a6be1be973fa5003b71345a3c235e65e7e39b7c7b3651a52c66a7c10cb3b: Status 404 returned error can't find the container with id 1130a6be1be973fa5003b71345a3c235e65e7e39b7c7b3651a52c66a7c10cb3b Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.821548 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:53 crc kubenswrapper[4786]: I0127 00:08:53.824827 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7x9t"] Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.106664 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:08:54 crc kubenswrapper[4786]: W0127 00:08:54.166189 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d9d5fed_4a31_44c4_ab19_9b5e7dcd6dcc.slice/crio-9d21ad4c70cd1524717a7cc4c718e2b9eaba70fc3f85dcf17f6496a98ff56dd1 WatchSource:0}: Error finding container 9d21ad4c70cd1524717a7cc4c718e2b9eaba70fc3f85dcf17f6496a98ff56dd1: Status 404 returned error can't find the container with id 9d21ad4c70cd1524717a7cc4c718e2b9eaba70fc3f85dcf17f6496a98ff56dd1 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.459899 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerStarted","Data":"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.463419 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" event={"ID":"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc","Type":"ContainerStarted","Data":"9d21ad4c70cd1524717a7cc4c718e2b9eaba70fc3f85dcf17f6496a98ff56dd1"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.465682 4786 generic.go:334] "Generic (PLEG): container finished" podID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerID="62406233c6c647f7a51cae5dd760b4ec7362bfa391588fa663cb440a0daf2f39" exitCode=0 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.465780 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerDied","Data":"62406233c6c647f7a51cae5dd760b4ec7362bfa391588fa663cb440a0daf2f39"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.474202 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9czjg" event={"ID":"be80aa92-329a-4f72-9dbb-b717f533fffb","Type":"ContainerStarted","Data":"aa67b0ac319e76d5d884be31e6e2f472a5e27eec692873f056d37e6c8434ea63"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.474253 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9czjg" event={"ID":"be80aa92-329a-4f72-9dbb-b717f533fffb","Type":"ContainerStarted","Data":"655db238fda5961248dc5f4c28af2c387d091323496475771f9588435a57af84"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.475923 4786 generic.go:334] "Generic (PLEG): container finished" podID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerID="1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b" exitCode=0 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.476082 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerDied","Data":"1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.479074 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" event={"ID":"0c741126-0e92-4984-916d-3ed5bd525b80","Type":"ContainerStarted","Data":"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.479124 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" event={"ID":"0c741126-0e92-4984-916d-3ed5bd525b80","Type":"ContainerStarted","Data":"1130a6be1be973fa5003b71345a3c235e65e7e39b7c7b3651a52c66a7c10cb3b"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.479864 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.482597 4786 generic.go:334] "Generic (PLEG): container finished" podID="647f6a23-ad88-4898-93fb-c19880c9d204" containerID="c32e142f279159ef9f0ae44bfd15f4b9907cee8114827e567009773fd5b1c294" exitCode=0 Jan 27 
00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.482671 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerDied","Data":"c32e142f279159ef9f0ae44bfd15f4b9907cee8114827e567009773fd5b1c294"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.485514 4786 generic.go:334] "Generic (PLEG): container finished" podID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerID="fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68" exitCode=0 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.485584 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerDied","Data":"fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.497483 4786 generic.go:334] "Generic (PLEG): container finished" podID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerID="094fbb2ac2dc036b94d1c3ad328be6e40f77789744525d3c37160efc235ae96d" exitCode=0 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.497830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerDied","Data":"094fbb2ac2dc036b94d1c3ad328be6e40f77789744525d3c37160efc235ae96d"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.502983 4786 generic.go:334] "Generic (PLEG): container finished" podID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerID="4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045" exitCode=0 Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.503051 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerDied","Data":"4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.506669 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerStarted","Data":"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b"} Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.553390 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" podStartSLOduration=17.553375295 podStartE2EDuration="17.553375295s" podCreationTimestamp="2026-01-27 00:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:54.551517502 +0000 UTC m=+180.035204555" watchObservedRunningTime="2026-01-27 00:08:54.553375295 +0000 UTC m=+180.037062338" Jan 27 00:08:54 crc kubenswrapper[4786]: I0127 00:08:54.891437 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.156254 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9317898f-297f-49d2-b0ae-811986544686" path="/var/lib/kubelet/pods/9317898f-297f-49d2-b0ae-811986544686/volumes" Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.514489 4786 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" event={"ID":"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc","Type":"ContainerStarted","Data":"e75ddb8f00889c91c349e6baf398f4affbaa3acd2e491596a6568c89c5baf485"} Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.514876 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.516213 4786 generic.go:334] "Generic (PLEG): container finished" podID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerID="e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b" exitCode=0 Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.516283 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerDied","Data":"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b"} Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.521237 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.529759 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9czjg" event={"ID":"be80aa92-329a-4f72-9dbb-b717f533fffb","Type":"ContainerStarted","Data":"47fe9398e60e54671c9bbf839cf6239ac091038e3549de50ac493093f493602e"} Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.532338 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerID="733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a" exitCode=0 Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.533175 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerDied","Data":"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a"} Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.558271 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" podStartSLOduration=18.558254281 podStartE2EDuration="18.558254281s" podCreationTimestamp="2026-01-27 00:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:55.540935878 +0000 UTC m=+181.024622921" watchObservedRunningTime="2026-01-27 00:08:55.558254281 +0000 UTC m=+181.041941324" Jan 27 00:08:55 crc kubenswrapper[4786]: I0127 00:08:55.571555 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9czjg" podStartSLOduration=159.57153229 podStartE2EDuration="2m39.57153229s" podCreationTimestamp="2026-01-27 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:08:55.570165721 +0000 UTC m=+181.053852784" watchObservedRunningTime="2026-01-27 00:08:55.57153229 +0000 UTC m=+181.055219333" Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.122939 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.165279 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.180723 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xjvbw" Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.557974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerStarted","Data":"87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d"} Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.558092 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" podUID="0c741126-0e92-4984-916d-3ed5bd525b80" containerName="route-controller-manager" containerID="cri-o://0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d" gracePeriod=30 Jan 27 00:08:57 crc kubenswrapper[4786]: I0127 00:08:57.588520 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wmtm" podStartSLOduration=3.072841493 podStartE2EDuration="34.588501336s" podCreationTimestamp="2026-01-27 00:08:23 +0000 UTC" firstStartedPulling="2026-01-27 00:08:25.022148562 +0000 UTC m=+150.505835605" lastFinishedPulling="2026-01-27 00:08:56.537808405 +0000 UTC m=+182.021495448" observedRunningTime="2026-01-27 00:08:57.581524718 +0000 UTC m=+183.065211761" watchObservedRunningTime="2026-01-27 00:08:57.588501336 +0000 UTC m=+183.072188389" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.322881 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.353982 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:08:58 crc kubenswrapper[4786]: E0127 00:08:58.354257 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c741126-0e92-4984-916d-3ed5bd525b80" containerName="route-controller-manager" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.354274 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c741126-0e92-4984-916d-3ed5bd525b80" containerName="route-controller-manager" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.354422 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c741126-0e92-4984-916d-3ed5bd525b80" containerName="route-controller-manager" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.354904 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.364498 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.435414 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxsr\" (UniqueName: \"kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr\") pod \"0c741126-0e92-4984-916d-3ed5bd525b80\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.435474 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert\") pod \"0c741126-0e92-4984-916d-3ed5bd525b80\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.435593 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca\") pod \"0c741126-0e92-4984-916d-3ed5bd525b80\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.435621 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config\") pod \"0c741126-0e92-4984-916d-3ed5bd525b80\" (UID: \"0c741126-0e92-4984-916d-3ed5bd525b80\") " Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.436444 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config" (OuterVolumeSpecName: "config") pod "0c741126-0e92-4984-916d-3ed5bd525b80" (UID: "0c741126-0e92-4984-916d-3ed5bd525b80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.436921 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c741126-0e92-4984-916d-3ed5bd525b80" (UID: "0c741126-0e92-4984-916d-3ed5bd525b80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.441725 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c741126-0e92-4984-916d-3ed5bd525b80" (UID: "0c741126-0e92-4984-916d-3ed5bd525b80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.452335 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr" (OuterVolumeSpecName: "kube-api-access-dqxsr") pod "0c741126-0e92-4984-916d-3ed5bd525b80" (UID: "0c741126-0e92-4984-916d-3ed5bd525b80"). InnerVolumeSpecName "kube-api-access-dqxsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542432 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542489 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6rj\" (UniqueName: \"kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542866 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542885 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c741126-0e92-4984-916d-3ed5bd525b80-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542899 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxsr\" (UniqueName: \"kubernetes.io/projected/0c741126-0e92-4984-916d-3ed5bd525b80-kube-api-access-dqxsr\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.542913 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c741126-0e92-4984-916d-3ed5bd525b80-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.566656 4786 generic.go:334] "Generic (PLEG): container finished" podID="0c741126-0e92-4984-916d-3ed5bd525b80" containerID="0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d" exitCode=0 Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.566744 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" event={"ID":"0c741126-0e92-4984-916d-3ed5bd525b80","Type":"ContainerDied","Data":"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d"} Jan 27 00:08:58 crc 
kubenswrapper[4786]: I0127 00:08:58.566767 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.566790 4786 scope.go:117] "RemoveContainer" containerID="0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.566775 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw" event={"ID":"0c741126-0e92-4984-916d-3ed5bd525b80","Type":"ContainerDied","Data":"1130a6be1be973fa5003b71345a3c235e65e7e39b7c7b3651a52c66a7c10cb3b"} Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.572052 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerStarted","Data":"333e857079a760d25cefc8dbf4ca460fb1a31626fc0bc05bbc6b58a756285357"} Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.572180 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" podUID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" containerName="controller-manager" containerID="cri-o://e75ddb8f00889c91c349e6baf398f4affbaa3acd2e491596a6568c89c5baf485" gracePeriod=30 Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.598176 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c76nj" podStartSLOduration=3.147179448 podStartE2EDuration="34.598150228s" podCreationTimestamp="2026-01-27 00:08:24 +0000 UTC" firstStartedPulling="2026-01-27 00:08:26.158238616 +0000 UTC m=+151.641925659" lastFinishedPulling="2026-01-27 00:08:57.609209396 +0000 UTC m=+183.092896439" observedRunningTime="2026-01-27 00:08:58.589519182 +0000 UTC m=+184.073206225" watchObservedRunningTime="2026-01-27 00:08:58.598150228 +0000 UTC m=+184.081837311" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.609263 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.612450 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8576df8657-t5wdw"] Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.644283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.644340 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.644385 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h6rj\" (UniqueName: 
\"kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.644429 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.646458 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.647457 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.648798 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.665632 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h6rj\" (UniqueName: \"kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj\") pod \"route-controller-manager-77cc874cc-bshvt\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:58 crc kubenswrapper[4786]: I0127 00:08:58.684298 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.048313 4786 scope.go:117] "RemoveContainer" containerID="0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d" Jan 27 00:08:59 crc kubenswrapper[4786]: E0127 00:08:59.051123 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d\": container with ID starting with 0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d not found: ID does not exist" containerID="0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.051318 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d"} err="failed to get container status \"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d\": rpc error: code = NotFound desc = could not find container \"0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d\": container with ID starting with 0f79dd3407f62e16445f5c9ff4014e375e6e9ab48b5ca2b23581d86482c57d4d not found: ID does not exist" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.161343 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c741126-0e92-4984-916d-3ed5bd525b80" path="/var/lib/kubelet/pods/0c741126-0e92-4984-916d-3ed5bd525b80/volumes" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.465840 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:08:59 crc kubenswrapper[4786]: W0127 00:08:59.473608 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ccd73c5_ade2_4cb7_ac13_3a454ad349b5.slice/crio-e23faae74fe57955ece4baf4fd48bf296ce3c22d5723770163aff045044a09b5 WatchSource:0}: Error finding container e23faae74fe57955ece4baf4fd48bf296ce3c22d5723770163aff045044a09b5: Status 404 returned error can't find the container with id e23faae74fe57955ece4baf4fd48bf296ce3c22d5723770163aff045044a09b5 Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.598069 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerStarted","Data":"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f"} Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.600439 4786 generic.go:334] "Generic (PLEG): container finished" podID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" containerID="e75ddb8f00889c91c349e6baf398f4affbaa3acd2e491596a6568c89c5baf485" exitCode=0 Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.600502 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" event={"ID":"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc","Type":"ContainerDied","Data":"e75ddb8f00889c91c349e6baf398f4affbaa3acd2e491596a6568c89c5baf485"} Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.601841 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" 
event={"ID":"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5","Type":"ContainerStarted","Data":"e23faae74fe57955ece4baf4fd48bf296ce3c22d5723770163aff045044a09b5"} Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.749235 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.860807 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert\") pod \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.860862 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles\") pod \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.860929 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw9lw\" (UniqueName: \"kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw\") pod \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.860964 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config\") pod \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.861010 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca\") pod \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\" (UID: \"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc\") " Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.861852 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" (UID: "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.863116 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" (UID: "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.863134 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config" (OuterVolumeSpecName: "config") pod "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" (UID: "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.867293 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw" (OuterVolumeSpecName: "kube-api-access-hw9lw") pod "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" (UID: "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc"). InnerVolumeSpecName "kube-api-access-hw9lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.870102 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" (UID: "1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.962783 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.962822 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.962834 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.962847 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:08:59 crc kubenswrapper[4786]: I0127 00:08:59.962860 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw9lw\" (UniqueName: \"kubernetes.io/projected/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc-kube-api-access-hw9lw\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.607206 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.610318 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c776f899-5vc22" event={"ID":"1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc","Type":"ContainerDied","Data":"9d21ad4c70cd1524717a7cc4c718e2b9eaba70fc3f85dcf17f6496a98ff56dd1"} Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.610356 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" event={"ID":"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5","Type":"ContainerStarted","Data":"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c"} Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.610370 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.610717 4786 scope.go:117] "RemoveContainer" containerID="e75ddb8f00889c91c349e6baf398f4affbaa3acd2e491596a6568c89c5baf485" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.616272 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.633099 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wgj2x" podStartSLOduration=3.741877489 podStartE2EDuration="36.633081507s" podCreationTimestamp="2026-01-27 00:08:24 +0000 UTC" firstStartedPulling="2026-01-27 00:08:26.158802352 +0000 UTC m=+151.642489395" lastFinishedPulling="2026-01-27 00:08:59.05000636 +0000 UTC m=+184.533693413" observedRunningTime="2026-01-27 00:09:00.629971598 +0000 UTC m=+186.113658661" watchObservedRunningTime="2026-01-27 00:09:00.633081507 +0000 UTC m=+186.116768550" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.648855 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" podStartSLOduration=3.648720182 podStartE2EDuration="3.648720182s" podCreationTimestamp="2026-01-27 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:00.643696909 +0000 UTC m=+186.127383952" watchObservedRunningTime="2026-01-27 00:09:00.648720182 +0000 UTC m=+186.132407225" Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.658512 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:09:00 crc kubenswrapper[4786]: I0127 00:09:00.661237 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c776f899-5vc22"] Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.129371 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:01 crc kubenswrapper[4786]: E0127 00:09:01.129666 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" containerName="controller-manager" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.129684 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" containerName="controller-manager" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.129815 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" containerName="controller-manager" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.130273 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.141780 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.145228 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.145723 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.146265 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.146405 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.153918 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.154287 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc" path="/var/lib/kubelet/pods/1d9d5fed-4a31-44c4-ab19-9b5e7dcd6dcc/volumes" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.155639 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.162337 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.278257 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.278340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.278382 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 
00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.278406 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.278441 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76n2j\" (UniqueName: \"kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.379142 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.379212 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.379240 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.379264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.379294 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76n2j\" (UniqueName: \"kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.380683 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.381842 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.382129 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.391326 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.398976 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76n2j\" (UniqueName: \"kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j\") pod \"controller-manager-57dc9f4b95-pfd8x\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.447882 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.635102 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerStarted","Data":"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847"} Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.644361 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerStarted","Data":"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d"} Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.654808 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerStarted","Data":"3679913d9a1522a5b46ac7e80a79a47b56441f972e44af8b1a457ca306def64b"} Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.660212 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerStarted","Data":"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b"} Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.662832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerStarted","Data":"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12"} Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.698946 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4nmjh" podStartSLOduration=2.912880191 
podStartE2EDuration="36.698921479s" podCreationTimestamp="2026-01-27 00:08:25 +0000 UTC" firstStartedPulling="2026-01-27 00:08:27.179811848 +0000 UTC m=+152.663498881" lastFinishedPulling="2026-01-27 00:09:00.965853126 +0000 UTC m=+186.449540169" observedRunningTime="2026-01-27 00:09:01.67610913 +0000 UTC m=+187.159796163" watchObservedRunningTime="2026-01-27 00:09:01.698921479 +0000 UTC m=+187.182608522" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.715521 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-npglr" podStartSLOduration=3.915058729 podStartE2EDuration="39.715501322s" podCreationTimestamp="2026-01-27 00:08:22 +0000 UTC" firstStartedPulling="2026-01-27 00:08:25.048181464 +0000 UTC m=+150.531868517" lastFinishedPulling="2026-01-27 00:09:00.848624067 +0000 UTC m=+186.332311110" observedRunningTime="2026-01-27 00:09:01.693631399 +0000 UTC m=+187.177318442" watchObservedRunningTime="2026-01-27 00:09:01.715501322 +0000 UTC m=+187.199188365" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.747922 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sgprl" podStartSLOduration=2.8473446019999997 podStartE2EDuration="39.747897275s" podCreationTimestamp="2026-01-27 00:08:22 +0000 UTC" firstStartedPulling="2026-01-27 00:08:23.948422404 +0000 UTC m=+149.432109447" lastFinishedPulling="2026-01-27 00:09:00.848975077 +0000 UTC m=+186.332662120" observedRunningTime="2026-01-27 00:09:01.716004456 +0000 UTC m=+187.199691499" watchObservedRunningTime="2026-01-27 00:09:01.747897275 +0000 UTC m=+187.231584318" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.748710 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rkp7f" podStartSLOduration=3.175953121 podStartE2EDuration="35.748703678s" podCreationTimestamp="2026-01-27 00:08:26 +0000 UTC" firstStartedPulling="2026-01-27 00:08:28.235427678 +0000 UTC m=+153.719114721" lastFinishedPulling="2026-01-27 00:09:00.808178235 +0000 UTC m=+186.291865278" observedRunningTime="2026-01-27 00:09:01.737605601 +0000 UTC m=+187.221292644" watchObservedRunningTime="2026-01-27 00:09:01.748703678 +0000 UTC m=+187.232390721" Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.754206 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:01 crc kubenswrapper[4786]: I0127 00:09:01.768608 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ml494" podStartSLOduration=3.959138274 podStartE2EDuration="39.768590734s" podCreationTimestamp="2026-01-27 00:08:22 +0000 UTC" firstStartedPulling="2026-01-27 00:08:25.053741122 +0000 UTC m=+150.537428185" lastFinishedPulling="2026-01-27 00:09:00.863193602 +0000 UTC m=+186.346880645" observedRunningTime="2026-01-27 00:09:01.761914054 +0000 UTC m=+187.245601087" watchObservedRunningTime="2026-01-27 00:09:01.768590734 +0000 UTC m=+187.252277777" Jan 27 00:09:02 crc kubenswrapper[4786]: I0127 00:09:02.668646 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" event={"ID":"cfa60283-2dfa-4dae-8515-3b9cd2df43b8","Type":"ContainerStarted","Data":"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6"} Jan 27 00:09:02 crc kubenswrapper[4786]: I0127 00:09:02.669022 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" event={"ID":"cfa60283-2dfa-4dae-8515-3b9cd2df43b8","Type":"ContainerStarted","Data":"4284dfbab46a150b243543dd38596d44d3778c98c03740213ad3e9a0284c6d13"} Jan 27 00:09:02 crc kubenswrapper[4786]: I0127 00:09:02.688098 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" podStartSLOduration=5.688080168 podStartE2EDuration="5.688080168s" podCreationTimestamp="2026-01-27 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:02.684559297 +0000 UTC m=+188.168246360" watchObservedRunningTime="2026-01-27 00:09:02.688080168 +0000 UTC m=+188.171767231" Jan 27 00:09:02 crc kubenswrapper[4786]: I0127 00:09:02.904970 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:02 crc kubenswrapper[4786]: I0127 00:09:02.905246 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.138834 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.140035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.140041 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.140161 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.142152 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.142647 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.156528 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.310941 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.326084 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.326396 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc 
kubenswrapper[4786]: I0127 00:09:03.371673 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.371929 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.419994 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.427649 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.427732 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.430923 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.464264 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.494843 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.506770 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.507035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.552108 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.676718 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.682614 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.723630 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:03 crc kubenswrapper[4786]: I0127 00:09:03.760808 4786 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.138211 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.276748 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-npglr" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="registry-server" probeResult="failure" output=< Jan 27 00:09:04 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 00:09:04 crc kubenswrapper[4786]: > Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.589683 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.622401 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.679069 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483","Type":"ContainerStarted","Data":"4d279cf547808f3a6d4d551d64c5d96b8a83add05496e4c04c775c96ef87ec2a"} Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.679123 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483","Type":"ContainerStarted","Data":"b3a186dd6a237014b11a03230895da7cd67e97445ef884b855e911c5a9aa0eb7"} Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.700890 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.700874256 podStartE2EDuration="1.700874256s" podCreationTimestamp="2026-01-27 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:04.698704214 +0000 UTC m=+190.182391257" watchObservedRunningTime="2026-01-27 00:09:04.700874256 +0000 UTC m=+190.184561299" Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.911542 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.911617 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:04 crc kubenswrapper[4786]: I0127 00:09:04.968493 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.354377 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.354715 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.437387 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.685343 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" containerID="4d279cf547808f3a6d4d551d64c5d96b8a83add05496e4c04c775c96ef87ec2a" exitCode=0 Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.685392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483","Type":"ContainerDied","Data":"4d279cf547808f3a6d4d551d64c5d96b8a83add05496e4c04c775c96ef87ec2a"} Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.686127 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wmtm" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="registry-server" containerID="cri-o://87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d" gracePeriod=2 Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.732980 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:05 crc kubenswrapper[4786]: I0127 00:09:05.742180 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.099556 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.099619 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.562389 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.562437 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.691605 4786 generic.go:334] "Generic (PLEG): container finished" podID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerID="87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d" exitCode=0 Jan 27 00:09:06 crc kubenswrapper[4786]: I0127 00:09:06.692246 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerDied","Data":"87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d"} Jan 27 00:09:06 crc kubenswrapper[4786]: E0127 00:09:06.805368 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd7c9ae_7333_4ef6_adbb_4e7037046a0f.slice/crio-conmon-87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d.scope\": RecentStats: unable to find data in memory cache]" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.007193 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.008052 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.090061 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir\") pod \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.090267 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access\") pod \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\" (UID: \"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483\") " Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.090454 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" (UID: "b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.105121 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" (UID: "b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.176053 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4nmjh" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="registry-server" probeResult="failure" output=< Jan 27 00:09:07 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 00:09:07 crc kubenswrapper[4786]: > Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.192622 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.192645 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.607015 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rkp7f" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="registry-server" probeResult="failure" output=< Jan 27 00:09:07 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 00:09:07 crc kubenswrapper[4786]: > Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.657941 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.698155 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wmtm" event={"ID":"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f","Type":"ContainerDied","Data":"65e1921f9d300e8b0a0d8a5bcb9b801c83fb9d56611d2c455d2cbe4a0c7537ae"} Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.698318 4786 scope.go:117] "RemoveContainer" containerID="87e7b2ac671aa9ef146db6c9f12fd1ea80b5c47f64ac8cf52924e54d6fc2202d" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.698359 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wmtm" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.699748 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content\") pod \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.702136 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.702127 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483","Type":"ContainerDied","Data":"b3a186dd6a237014b11a03230895da7cd67e97445ef884b855e911c5a9aa0eb7"} Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.702185 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a186dd6a237014b11a03230895da7cd67e97445ef884b855e911c5a9aa0eb7" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.706321 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities\") pod \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.706367 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8b29\" (UniqueName: \"kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29\") pod \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\" (UID: \"8cd7c9ae-7333-4ef6-adbb-4e7037046a0f\") " Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.707041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities" (OuterVolumeSpecName: "utilities") pod "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" (UID: "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.717792 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29" (OuterVolumeSpecName: "kube-api-access-r8b29") pod "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" (UID: "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f"). InnerVolumeSpecName "kube-api-access-r8b29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.720300 4786 scope.go:117] "RemoveContainer" containerID="094fbb2ac2dc036b94d1c3ad328be6e40f77789744525d3c37160efc235ae96d" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.746921 4786 scope.go:117] "RemoveContainer" containerID="ed971c17d5718f3d8fdf2836a127e7b1bdde45ac024dcfeb2e9df1970ab55db2" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.758081 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" (UID: "8cd7c9ae-7333-4ef6-adbb-4e7037046a0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.807889 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.807915 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:07 crc kubenswrapper[4786]: I0127 00:09:07.807926 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8b29\" (UniqueName: \"kubernetes.io/projected/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f-kube-api-access-r8b29\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:08 crc kubenswrapper[4786]: I0127 00:09:08.028257 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:09:08 crc kubenswrapper[4786]: I0127 00:09:08.032194 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wmtm"] Jan 27 00:09:08 crc kubenswrapper[4786]: I0127 00:09:08.709200 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c76nj" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="registry-server" containerID="cri-o://333e857079a760d25cefc8dbf4ca460fb1a31626fc0bc05bbc6b58a756285357" gracePeriod=2 Jan 27 00:09:09 crc kubenswrapper[4786]: I0127 00:09:09.153347 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" path="/var/lib/kubelet/pods/8cd7c9ae-7333-4ef6-adbb-4e7037046a0f/volumes" Jan 27 00:09:09 crc kubenswrapper[4786]: I0127 00:09:09.717536 4786 generic.go:334] "Generic (PLEG): container finished" podID="647f6a23-ad88-4898-93fb-c19880c9d204" containerID="333e857079a760d25cefc8dbf4ca460fb1a31626fc0bc05bbc6b58a756285357" exitCode=0 Jan 27 00:09:09 crc kubenswrapper[4786]: I0127 00:09:09.717598 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerDied","Data":"333e857079a760d25cefc8dbf4ca460fb1a31626fc0bc05bbc6b58a756285357"} Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.235957 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.345802 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjwd\" (UniqueName: \"kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd\") pod \"647f6a23-ad88-4898-93fb-c19880c9d204\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.345856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities\") pod \"647f6a23-ad88-4898-93fb-c19880c9d204\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.345897 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content\") pod \"647f6a23-ad88-4898-93fb-c19880c9d204\" (UID: \"647f6a23-ad88-4898-93fb-c19880c9d204\") " Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.347083 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities" (OuterVolumeSpecName: "utilities") pod "647f6a23-ad88-4898-93fb-c19880c9d204" (UID: "647f6a23-ad88-4898-93fb-c19880c9d204"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.354810 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd" (OuterVolumeSpecName: "kube-api-access-7hjwd") pod "647f6a23-ad88-4898-93fb-c19880c9d204" (UID: "647f6a23-ad88-4898-93fb-c19880c9d204"). InnerVolumeSpecName "kube-api-access-7hjwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.375258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "647f6a23-ad88-4898-93fb-c19880c9d204" (UID: "647f6a23-ad88-4898-93fb-c19880c9d204"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.447331 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjwd\" (UniqueName: \"kubernetes.io/projected/647f6a23-ad88-4898-93fb-c19880c9d204-kube-api-access-7hjwd\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.447369 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.447380 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647f6a23-ad88-4898-93fb-c19880c9d204-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.727963 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c76nj" event={"ID":"647f6a23-ad88-4898-93fb-c19880c9d204","Type":"ContainerDied","Data":"0af4d7315ee12bbf653ea44fd3a01a0b6c4120c075698bf4359836ccc756bd38"} Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.728016 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c76nj" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.728033 4786 scope.go:117] "RemoveContainer" containerID="333e857079a760d25cefc8dbf4ca460fb1a31626fc0bc05bbc6b58a756285357" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.758315 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.761775 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c76nj"] Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.763656 4786 scope.go:117] "RemoveContainer" containerID="c32e142f279159ef9f0ae44bfd15f4b9907cee8114827e567009773fd5b1c294" Jan 27 00:09:10 crc kubenswrapper[4786]: I0127 00:09:10.779286 4786 scope.go:117] "RemoveContainer" containerID="7adc04610b915e8b3d7098fbceb3c58513ca36cd030222fd88d4c1be69a5e433" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.159346 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" path="/var/lib/kubelet/pods/647f6a23-ad88-4898-93fb-c19880c9d204/volumes" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.739482 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740098 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="extract-utilities" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740116 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="extract-utilities" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740129 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740137 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740149 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740156 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740176 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="extract-content" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740184 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="extract-content" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740202 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="extract-utilities" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740209 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="extract-utilities" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740219 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="extract-content" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740226 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="extract-content" Jan 27 00:09:11 crc kubenswrapper[4786]: E0127 00:09:11.740238 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" containerName="pruner" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740245 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" containerName="pruner" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740383 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b765e100-a7bc-4bfd-ae8e-0e9cf4ec5483" containerName="pruner" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740397 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="647f6a23-ad88-4898-93fb-c19880c9d204" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740405 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd7c9ae-7333-4ef6-adbb-4e7037046a0f" containerName="registry-server" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.740942 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.742983 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.743561 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.760302 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.766282 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.766372 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.766409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.867871 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.867958 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.868071 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.868188 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.868244 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:11 crc kubenswrapper[4786]: I0127 00:09:11.884040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access\") pod \"installer-9-crc\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:12 crc kubenswrapper[4786]: I0127 00:09:12.056903 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:12 crc kubenswrapper[4786]: I0127 00:09:12.491462 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 00:09:12 crc kubenswrapper[4786]: W0127 00:09:12.499030 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod25c53e44_d237_4830_a5f2_f4f5fdc62320.slice/crio-7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a WatchSource:0}: Error finding container 7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a: Status 404 returned error can't find the container with id 7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a Jan 27 00:09:12 crc kubenswrapper[4786]: I0127 00:09:12.741902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25c53e44-d237-4830-a5f2-f4f5fdc62320","Type":"ContainerStarted","Data":"7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a"} Jan 27 00:09:12 crc kubenswrapper[4786]: I0127 00:09:12.949276 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:13 crc kubenswrapper[4786]: I0127 00:09:13.179990 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:13 crc kubenswrapper[4786]: I0127 00:09:13.223781 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:13 crc kubenswrapper[4786]: I0127 00:09:13.413365 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:13 crc kubenswrapper[4786]: I0127 00:09:13.750508 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25c53e44-d237-4830-a5f2-f4f5fdc62320","Type":"ContainerStarted","Data":"aab15098eaab48b2fbc6b939fd2e5bc05790d6a0ed5db5b21adf9ee7827458e2"} Jan 27 00:09:13 crc kubenswrapper[4786]: I0127 00:09:13.780445 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.780416986 podStartE2EDuration="2.780416986s" podCreationTimestamp="2026-01-27 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:13.774517349 +0000 UTC m=+199.258204482" watchObservedRunningTime="2026-01-27 00:09:13.780416986 +0000 UTC m=+199.264104069" Jan 27 00:09:15 crc kubenswrapper[4786]: I0127 00:09:15.206612 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:09:15 crc kubenswrapper[4786]: I0127 00:09:15.207058 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ml494" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="registry-server" containerID="cri-o://3679913d9a1522a5b46ac7e80a79a47b56441f972e44af8b1a457ca306def64b" gracePeriod=2 Jan 27 00:09:15 crc kubenswrapper[4786]: I0127 00:09:15.768439 4786 generic.go:334] "Generic (PLEG): container finished" podID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerID="3679913d9a1522a5b46ac7e80a79a47b56441f972e44af8b1a457ca306def64b" exitCode=0 Jan 27 00:09:15 crc kubenswrapper[4786]: I0127 00:09:15.768506 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerDied","Data":"3679913d9a1522a5b46ac7e80a79a47b56441f972e44af8b1a457ca306def64b"} Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.144852 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.187309 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.236448 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.325717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities\") pod \"d742b67a-98f2-45be-abda-ad9ed29570b6\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.325764 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2wzf\" (UniqueName: \"kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf\") pod \"d742b67a-98f2-45be-abda-ad9ed29570b6\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.325783 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content\") pod \"d742b67a-98f2-45be-abda-ad9ed29570b6\" (UID: \"d742b67a-98f2-45be-abda-ad9ed29570b6\") " Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.326642 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities" (OuterVolumeSpecName: "utilities") pod "d742b67a-98f2-45be-abda-ad9ed29570b6" (UID: "d742b67a-98f2-45be-abda-ad9ed29570b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.334610 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf" (OuterVolumeSpecName: "kube-api-access-p2wzf") pod "d742b67a-98f2-45be-abda-ad9ed29570b6" (UID: "d742b67a-98f2-45be-abda-ad9ed29570b6"). InnerVolumeSpecName "kube-api-access-p2wzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.370616 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d742b67a-98f2-45be-abda-ad9ed29570b6" (UID: "d742b67a-98f2-45be-abda-ad9ed29570b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.426751 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.427085 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2wzf\" (UniqueName: \"kubernetes.io/projected/d742b67a-98f2-45be-abda-ad9ed29570b6-kube-api-access-p2wzf\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.427225 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d742b67a-98f2-45be-abda-ad9ed29570b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.602766 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.654755 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.782308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml494" event={"ID":"d742b67a-98f2-45be-abda-ad9ed29570b6","Type":"ContainerDied","Data":"fe23f8c5e07ae0c33de8ca3fd2c2d90b79f3b07d0166af214c2d899dbffc7171"} Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.782366 4786 scope.go:117] "RemoveContainer" containerID="3679913d9a1522a5b46ac7e80a79a47b56441f972e44af8b1a457ca306def64b" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.782332 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml494" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.803299 4786 scope.go:117] "RemoveContainer" containerID="62406233c6c647f7a51cae5dd760b4ec7362bfa391588fa663cb440a0daf2f39" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.829474 4786 scope.go:117] "RemoveContainer" containerID="64cf09dcbcbe39bcbfb0566353cc9169dc9d8d2dfb0a51cfc1f6bc55ed82f995" Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.855964 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:09:16 crc kubenswrapper[4786]: I0127 00:09:16.857254 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ml494"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.127545 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.128132 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" podUID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" containerName="controller-manager" containerID="cri-o://bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6" gracePeriod=30 Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.141851 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.142117 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" podUID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" containerName="route-controller-manager" containerID="cri-o://69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c" gracePeriod=30 Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.155410 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" path="/var/lib/kubelet/pods/d742b67a-98f2-45be-abda-ad9ed29570b6/volumes" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.619978 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.643541 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert\") pod \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.644378 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca\") pod \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.644410 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config\") pod \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.644442 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h6rj\" (UniqueName: \"kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj\") pod \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\" (UID: \"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.645618 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config" (OuterVolumeSpecName: "config") pod "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" (UID: "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.645713 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" (UID: "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.649767 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj" (OuterVolumeSpecName: "kube-api-access-2h6rj") pod "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" (UID: "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5"). InnerVolumeSpecName "kube-api-access-2h6rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.652270 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" (UID: "7ccd73c5-ade2-4cb7-ac13-3a454ad349b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.730000 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745520 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config\") pod \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745586 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca\") pod \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745665 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76n2j\" (UniqueName: \"kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j\") pod \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745687 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles\") pod \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert\") pod \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\" (UID: \"cfa60283-2dfa-4dae-8515-3b9cd2df43b8\") " Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745921 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745933 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745942 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h6rj\" (UniqueName: \"kubernetes.io/projected/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-kube-api-access-2h6rj\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.745952 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.746444 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config" (OuterVolumeSpecName: "config") pod "cfa60283-2dfa-4dae-8515-3b9cd2df43b8" (UID: "cfa60283-2dfa-4dae-8515-3b9cd2df43b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.746804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cfa60283-2dfa-4dae-8515-3b9cd2df43b8" (UID: "cfa60283-2dfa-4dae-8515-3b9cd2df43b8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.747025 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "cfa60283-2dfa-4dae-8515-3b9cd2df43b8" (UID: "cfa60283-2dfa-4dae-8515-3b9cd2df43b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.790563 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j" (OuterVolumeSpecName: "kube-api-access-76n2j") pod "cfa60283-2dfa-4dae-8515-3b9cd2df43b8" (UID: "cfa60283-2dfa-4dae-8515-3b9cd2df43b8"). InnerVolumeSpecName "kube-api-access-76n2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.790885 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cfa60283-2dfa-4dae-8515-3b9cd2df43b8" (UID: "cfa60283-2dfa-4dae-8515-3b9cd2df43b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.799444 4786 generic.go:334] "Generic (PLEG): container finished" podID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" containerID="bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6" exitCode=0 Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.799517 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" event={"ID":"cfa60283-2dfa-4dae-8515-3b9cd2df43b8","Type":"ContainerDied","Data":"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6"} Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.799548 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" event={"ID":"cfa60283-2dfa-4dae-8515-3b9cd2df43b8","Type":"ContainerDied","Data":"4284dfbab46a150b243543dd38596d44d3778c98c03740213ad3e9a0284c6d13"} Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.799603 4786 scope.go:117] "RemoveContainer" containerID="bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.799724 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.806676 4786 generic.go:334] "Generic (PLEG): container finished" podID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" containerID="69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c" exitCode=0 Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.806727 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.806737 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" event={"ID":"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5","Type":"ContainerDied","Data":"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c"} Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.806767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt" event={"ID":"7ccd73c5-ade2-4cb7-ac13-3a454ad349b5","Type":"ContainerDied","Data":"e23faae74fe57955ece4baf4fd48bf296ce3c22d5723770163aff045044a09b5"} Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.821730 4786 scope.go:117] "RemoveContainer" containerID="bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6" Jan 27 00:09:17 crc kubenswrapper[4786]: E0127 00:09:17.822096 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6\": container with ID starting with bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6 not found: ID does not exist" containerID="bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.822131 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6"} err="failed to get container status \"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6\": rpc error: code = NotFound desc = could not find container \"bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6\": container with ID starting with bc31349423ed6b2e2c8acafad94492398a03154c6a16b54edc74fd14bf8800e6 not found: ID does not exist" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.822156 4786 scope.go:117] "RemoveContainer" containerID="69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.834541 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.844293 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc874cc-bshvt"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846621 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846632 4786 scope.go:117] "RemoveContainer" containerID="69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846643 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846846 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76n2j\" (UniqueName: 
\"kubernetes.io/projected/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-kube-api-access-76n2j\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846865 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.846879 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfa60283-2dfa-4dae-8515-3b9cd2df43b8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:17 crc kubenswrapper[4786]: E0127 00:09:17.847210 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c\": container with ID starting with 69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c not found: ID does not exist" containerID="69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.847245 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c"} err="failed to get container status \"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c\": rpc error: code = NotFound desc = could not find container \"69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c\": container with ID starting with 69ad0c80893db33713614b6ecbe7038e27bea38e71f65550773ceee2901d488c not found: ID does not exist" Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.848235 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:17 crc kubenswrapper[4786]: I0127 00:09:17.851412 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57dc9f4b95-pfd8x"] Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.167629 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" path="/var/lib/kubelet/pods/7ccd73c5-ade2-4cb7-ac13-3a454ad349b5/volumes" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.170051 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" path="/var/lib/kubelet/pods/cfa60283-2dfa-4dae-8515-3b9cd2df43b8/volumes" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.170999 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:19 crc kubenswrapper[4786]: E0127 00:09:19.171347 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="extract-utilities" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171384 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="extract-utilities" Jan 27 00:09:19 crc kubenswrapper[4786]: E0127 00:09:19.171413 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" containerName="route-controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171432 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" 
containerName="route-controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: E0127 00:09:19.171456 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="extract-content" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171473 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="extract-content" Jan 27 00:09:19 crc kubenswrapper[4786]: E0127 00:09:19.171496 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="registry-server" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171512 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="registry-server" Jan 27 00:09:19 crc kubenswrapper[4786]: E0127 00:09:19.171534 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" containerName="controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171551 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" containerName="controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171807 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ccd73c5-ade2-4cb7-ac13-3a454ad349b5" containerName="route-controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171852 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa60283-2dfa-4dae-8515-3b9cd2df43b8" containerName="controller-manager" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.171879 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d742b67a-98f2-45be-abda-ad9ed29570b6" containerName="registry-server" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.172515 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.172793 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.175299 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.176009 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.176434 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.179621 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.181698 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.181702 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.181705 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.181831 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.181888 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.182369 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.182474 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.182538 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.183046 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.184262 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.187014 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.189480 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.264938 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265087 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kjt5v\" (UniqueName: \"kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265174 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265250 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265344 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265433 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265535 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265624 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.265655 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24lj\" (UniqueName: \"kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367081 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367176 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367212 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24lj\" (UniqueName: \"kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367410 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjt5v\" (UniqueName: \"kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.367450 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: 
\"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.369239 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.369248 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.369526 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.369708 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.369800 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.374199 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.376023 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.394178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24lj\" (UniqueName: \"kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj\") pod \"controller-manager-54c99bf74-8lcxj\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.397059 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-kjt5v\" (UniqueName: \"kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v\") pod \"route-controller-manager-cf6f7b8bc-88xgd\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.502259 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.528805 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.617618 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:09:19 crc kubenswrapper[4786]: I0127 00:09:19.618099 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rkp7f" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="registry-server" containerID="cri-o://9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12" gracePeriod=2 Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.115977 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:20 crc kubenswrapper[4786]: W0127 00:09:20.138187 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf2b675_3302_45db_b15e_a3ce60747252.slice/crio-2395d6a070e482d826ed9d7bfad95008c4f724be0470b7f19a115fe6f358f965 WatchSource:0}: Error finding container 2395d6a070e482d826ed9d7bfad95008c4f724be0470b7f19a115fe6f358f965: Status 404 returned error can't find the container with id 2395d6a070e482d826ed9d7bfad95008c4f724be0470b7f19a115fe6f358f965 Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.158168 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:20 crc kubenswrapper[4786]: W0127 00:09:20.164418 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eba73ff_9436_4d13_a0ec_a42a1cf2ad60.slice/crio-ac37ce7f0f6864a3c008a357d2b943af9f105e9b0b73e3d34c4876fd5301d7ef WatchSource:0}: Error finding container ac37ce7f0f6864a3c008a357d2b943af9f105e9b0b73e3d34c4876fd5301d7ef: Status 404 returned error can't find the container with id ac37ce7f0f6864a3c008a357d2b943af9f105e9b0b73e3d34c4876fd5301d7ef Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.344690 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.344971 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 
00:09:20.345013 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.345600 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.345651 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" containerID="cri-o://624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8" gracePeriod=600 Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.767105 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.782067 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5lm5\" (UniqueName: \"kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5\") pod \"5b292c18-1899-422e-a805-a52c3b5d00b3\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.782214 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content\") pod \"5b292c18-1899-422e-a805-a52c3b5d00b3\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.782270 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities\") pod \"5b292c18-1899-422e-a805-a52c3b5d00b3\" (UID: \"5b292c18-1899-422e-a805-a52c3b5d00b3\") " Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.783404 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities" (OuterVolumeSpecName: "utilities") pod "5b292c18-1899-422e-a805-a52c3b5d00b3" (UID: "5b292c18-1899-422e-a805-a52c3b5d00b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.803122 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5" (OuterVolumeSpecName: "kube-api-access-n5lm5") pod "5b292c18-1899-422e-a805-a52c3b5d00b3" (UID: "5b292c18-1899-422e-a805-a52c3b5d00b3"). InnerVolumeSpecName "kube-api-access-n5lm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.838790 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" event={"ID":"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60","Type":"ContainerStarted","Data":"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.838833 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" event={"ID":"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60","Type":"ContainerStarted","Data":"ac37ce7f0f6864a3c008a357d2b943af9f105e9b0b73e3d34c4876fd5301d7ef"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.839882 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.845638 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.852808 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerID="9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12" exitCode=0 Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.852889 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerDied","Data":"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.852920 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkp7f" event={"ID":"5b292c18-1899-422e-a805-a52c3b5d00b3","Type":"ContainerDied","Data":"9ba2016a224c81fe5ed8b5a8bb3e8edde21a721d4c5a92bed7da55acf78aa3e4"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.852940 4786 scope.go:117] "RemoveContainer" containerID="9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.853072 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkp7f" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.856028 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8" exitCode=0 Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.856115 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.856162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.860802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" event={"ID":"acf2b675-3302-45db-b15e-a3ce60747252","Type":"ContainerStarted","Data":"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.860845 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" event={"ID":"acf2b675-3302-45db-b15e-a3ce60747252","Type":"ContainerStarted","Data":"2395d6a070e482d826ed9d7bfad95008c4f724be0470b7f19a115fe6f358f965"} Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.861349 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.863217 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" podStartSLOduration=3.86320392 podStartE2EDuration="3.86320392s" podCreationTimestamp="2026-01-27 00:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:20.85838038 +0000 UTC m=+206.342067433" watchObservedRunningTime="2026-01-27 00:09:20.86320392 +0000 UTC m=+206.346890973" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.867880 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.871526 4786 scope.go:117] "RemoveContainer" containerID="733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.884405 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.884436 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5lm5\" (UniqueName: \"kubernetes.io/projected/5b292c18-1899-422e-a805-a52c3b5d00b3-kube-api-access-n5lm5\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.904905 4786 scope.go:117] "RemoveContainer" 
containerID="16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.905606 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b292c18-1899-422e-a805-a52c3b5d00b3" (UID: "5b292c18-1899-422e-a805-a52c3b5d00b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.920934 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" podStartSLOduration=3.920916703 podStartE2EDuration="3.920916703s" podCreationTimestamp="2026-01-27 00:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:20.919221911 +0000 UTC m=+206.402908944" watchObservedRunningTime="2026-01-27 00:09:20.920916703 +0000 UTC m=+206.404603746" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.931867 4786 scope.go:117] "RemoveContainer" containerID="9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12" Jan 27 00:09:20 crc kubenswrapper[4786]: E0127 00:09:20.936181 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12\": container with ID starting with 9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12 not found: ID does not exist" containerID="9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.936224 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12"} err="failed to get container status \"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12\": rpc error: code = NotFound desc = could not find container \"9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12\": container with ID starting with 9a561e5d88ecdb97241b148adcf1d08529650ec739bac641f7ce16995ef55c12 not found: ID does not exist" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.936250 4786 scope.go:117] "RemoveContainer" containerID="733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a" Jan 27 00:09:20 crc kubenswrapper[4786]: E0127 00:09:20.937870 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a\": container with ID starting with 733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a not found: ID does not exist" containerID="733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.937903 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a"} err="failed to get container status \"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a\": rpc error: code = NotFound desc = could not find container \"733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a\": container with ID starting with 
733e231b602a07f82de6d9dc80cd861d06d2b397f70fa07c0448db7f1653c65a not found: ID does not exist" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.937922 4786 scope.go:117] "RemoveContainer" containerID="16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc" Jan 27 00:09:20 crc kubenswrapper[4786]: E0127 00:09:20.938762 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc\": container with ID starting with 16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc not found: ID does not exist" containerID="16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.938821 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc"} err="failed to get container status \"16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc\": rpc error: code = NotFound desc = could not find container \"16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc\": container with ID starting with 16db9b3529688c1225e6a51c6fdc308a6cc28af00f677642b519feabe50bc6fc not found: ID does not exist" Jan 27 00:09:20 crc kubenswrapper[4786]: I0127 00:09:20.985597 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b292c18-1899-422e-a805-a52c3b5d00b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:21 crc kubenswrapper[4786]: I0127 00:09:21.193090 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:09:21 crc kubenswrapper[4786]: I0127 00:09:21.196335 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rkp7f"] Jan 27 00:09:23 crc kubenswrapper[4786]: I0127 00:09:23.158938 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" path="/var/lib/kubelet/pods/5b292c18-1899-422e-a805-a52c3b5d00b3/volumes" Jan 27 00:09:29 crc kubenswrapper[4786]: I0127 00:09:29.635911 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" podUID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" containerName="oauth-openshift" containerID="cri-o://8a3aad85d24360a9a6b0f953544d2662088b81d374c78ffeafab11add653bba3" gracePeriod=15 Jan 27 00:09:29 crc kubenswrapper[4786]: I0127 00:09:29.919856 4786 generic.go:334] "Generic (PLEG): container finished" podID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" containerID="8a3aad85d24360a9a6b0f953544d2662088b81d374c78ffeafab11add653bba3" exitCode=0 Jan 27 00:09:29 crc kubenswrapper[4786]: I0127 00:09:29.919968 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" event={"ID":"5c2aae55-7128-4ccc-bcff-ca7775e8035a","Type":"ContainerDied","Data":"8a3aad85d24360a9a6b0f953544d2662088b81d374c78ffeafab11add653bba3"} Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.288484 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323606 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323702 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323798 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323844 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323909 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.323961 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324038 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324116 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dzrz\" (UniqueName: \"kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324184 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324228 
4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324269 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.324424 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection\") pod \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\" (UID: \"5c2aae55-7128-4ccc-bcff-ca7775e8035a\") " Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.327136 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.328923 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.328986 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.328986 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.329218 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.333520 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz" (OuterVolumeSpecName: "kube-api-access-7dzrz") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "kube-api-access-7dzrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.333550 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.334638 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.335962 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.336989 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.337212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.338829 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.339450 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.339673 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5c2aae55-7128-4ccc-bcff-ca7775e8035a" (UID: "5c2aae55-7128-4ccc-bcff-ca7775e8035a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.426463 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.426745 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.426820 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.426884 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.426940 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427002 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427063 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427124 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427178 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427239 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427295 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427361 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5c2aae55-7128-4ccc-bcff-ca7775e8035a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427417 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c2aae55-7128-4ccc-bcff-ca7775e8035a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.427479 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dzrz\" (UniqueName: \"kubernetes.io/projected/5c2aae55-7128-4ccc-bcff-ca7775e8035a-kube-api-access-7dzrz\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.929208 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" event={"ID":"5c2aae55-7128-4ccc-bcff-ca7775e8035a","Type":"ContainerDied","Data":"574083d8a3b3d608c3767277651d9545bda1833fde574addabf77da4e8ae9e08"} Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.929295 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vzqm5" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.930829 4786 scope.go:117] "RemoveContainer" containerID="8a3aad85d24360a9a6b0f953544d2662088b81d374c78ffeafab11add653bba3" Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.983332 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:09:30 crc kubenswrapper[4786]: I0127 00:09:30.986611 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vzqm5"] Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.160662 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" path="/var/lib/kubelet/pods/5c2aae55-7128-4ccc-bcff-ca7775e8035a/volumes" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.174526 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm"] Jan 27 00:09:31 crc kubenswrapper[4786]: E0127 00:09:31.174851 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="extract-utilities" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.174879 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="extract-utilities" Jan 27 00:09:31 crc kubenswrapper[4786]: E0127 00:09:31.174899 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" containerName="oauth-openshift" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.174912 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" containerName="oauth-openshift" Jan 27 00:09:31 crc kubenswrapper[4786]: E0127 00:09:31.174943 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="extract-content" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.174957 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="extract-content" Jan 27 00:09:31 crc kubenswrapper[4786]: E0127 00:09:31.174982 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="registry-server" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.174994 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="registry-server" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.175180 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2aae55-7128-4ccc-bcff-ca7775e8035a" containerName="oauth-openshift" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.175206 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b292c18-1899-422e-a805-a52c3b5d00b3" containerName="registry-server" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.175801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.179637 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.179986 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.181192 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.181266 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.181290 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.181681 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.181859 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.182014 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.183474 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.183623 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.183665 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.183842 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.198597 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.208565 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 00:09:31 crc 
kubenswrapper[4786]: I0127 00:09:31.208757 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm"] Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.214498 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237747 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237824 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4l9\" (UniqueName: \"kubernetes.io/projected/d841f596-a274-4109-b899-76f3029098a8-kube-api-access-sf4l9\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237842 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237860 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237875 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237901 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d841f596-a274-4109-b899-76f3029098a8-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " 
pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237919 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.237984 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.238025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.238040 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.238057 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.238097 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.238113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.339727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.339827 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.339926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.339968 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340071 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340111 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340142 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4l9\" (UniqueName: \"kubernetes.io/projected/d841f596-a274-4109-b899-76f3029098a8-kube-api-access-sf4l9\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340176 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340263 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d841f596-a274-4109-b899-76f3029098a8-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340301 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340419 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.340999 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d841f596-a274-4109-b899-76f3029098a8-audit-dir\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.342049 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.342161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.343369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.343516 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d841f596-a274-4109-b899-76f3029098a8-audit-policies\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.346776 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.348017 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.348816 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.348936 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-login\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.348958 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.349491 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-user-template-error\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: 
\"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.349588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-session\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.350392 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d841f596-a274-4109-b899-76f3029098a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.370124 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4l9\" (UniqueName: \"kubernetes.io/projected/d841f596-a274-4109-b899-76f3029098a8-kube-api-access-sf4l9\") pod \"oauth-openshift-5dcd86cbbd-5tshm\" (UID: \"d841f596-a274-4109-b899-76f3029098a8\") " pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:31 crc kubenswrapper[4786]: I0127 00:09:31.528934 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.022919 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm"] Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.949178 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" event={"ID":"d841f596-a274-4109-b899-76f3029098a8","Type":"ContainerStarted","Data":"a2d4644d3bfbd9a720cb3eb0e385586cc0fff395d7f523d33c8437a4e019e041"} Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.949667 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.949702 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" event={"ID":"d841f596-a274-4109-b899-76f3029098a8","Type":"ContainerStarted","Data":"682e00086779dac3cfe47acc8f3bd41872b51e242573b11b1a865ffb516a0552"} Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.960463 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" Jan 27 00:09:32 crc kubenswrapper[4786]: I0127 00:09:32.991543 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dcd86cbbd-5tshm" podStartSLOduration=28.991506966 podStartE2EDuration="28.991506966s" podCreationTimestamp="2026-01-27 00:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:32.982293217 +0000 UTC m=+218.465980300" watchObservedRunningTime="2026-01-27 00:09:32.991506966 +0000 UTC m=+218.475194039" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.124858 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.125283 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" podUID="acf2b675-3302-45db-b15e-a3ce60747252" containerName="controller-manager" containerID="cri-o://2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4" gracePeriod=30 Jan 27 00:09:37 crc kubenswrapper[4786]: E0127 00:09:37.206894 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf2b675_3302_45db_b15e_a3ce60747252.slice/crio-conmon-2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4.scope\": RecentStats: unable to find data in memory cache]" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.221346 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.221551 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" podUID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" containerName="route-controller-manager" containerID="cri-o://798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0" gracePeriod=30 Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.661790 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.667249 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750676 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca\") pod \"acf2b675-3302-45db-b15e-a3ce60747252\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750806 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca\") pod \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert\") pod \"acf2b675-3302-45db-b15e-a3ce60747252\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750911 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert\") pod \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750940 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjt5v\" (UniqueName: \"kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v\") pod \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.750968 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24lj\" (UniqueName: \"kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj\") pod \"acf2b675-3302-45db-b15e-a3ce60747252\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.751007 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles\") pod \"acf2b675-3302-45db-b15e-a3ce60747252\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.751048 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config\") pod \"acf2b675-3302-45db-b15e-a3ce60747252\" (UID: \"acf2b675-3302-45db-b15e-a3ce60747252\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.751069 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config\") pod \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\" (UID: \"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60\") " Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.751410 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca" (OuterVolumeSpecName: "client-ca") pod "acf2b675-3302-45db-b15e-a3ce60747252" (UID: 
"acf2b675-3302-45db-b15e-a3ce60747252"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.751436 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca" (OuterVolumeSpecName: "client-ca") pod "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" (UID: "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.752004 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config" (OuterVolumeSpecName: "config") pod "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" (UID: "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.752371 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "acf2b675-3302-45db-b15e-a3ce60747252" (UID: "acf2b675-3302-45db-b15e-a3ce60747252"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.752852 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config" (OuterVolumeSpecName: "config") pod "acf2b675-3302-45db-b15e-a3ce60747252" (UID: "acf2b675-3302-45db-b15e-a3ce60747252"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.755661 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" (UID: "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.755669 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acf2b675-3302-45db-b15e-a3ce60747252" (UID: "acf2b675-3302-45db-b15e-a3ce60747252"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.758990 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj" (OuterVolumeSpecName: "kube-api-access-v24lj") pod "acf2b675-3302-45db-b15e-a3ce60747252" (UID: "acf2b675-3302-45db-b15e-a3ce60747252"). InnerVolumeSpecName "kube-api-access-v24lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.761952 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v" (OuterVolumeSpecName: "kube-api-access-kjt5v") pod "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" (UID: "4eba73ff-9436-4d13-a0ec-a42a1cf2ad60"). 
InnerVolumeSpecName "kube-api-access-kjt5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852503 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852552 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf2b675-3302-45db-b15e-a3ce60747252-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852591 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852615 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjt5v\" (UniqueName: \"kubernetes.io/projected/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-kube-api-access-kjt5v\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852637 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24lj\" (UniqueName: \"kubernetes.io/projected/acf2b675-3302-45db-b15e-a3ce60747252-kube-api-access-v24lj\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852656 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852673 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852690 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.852705 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acf2b675-3302-45db-b15e-a3ce60747252-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.982494 4786 generic.go:334] "Generic (PLEG): container finished" podID="acf2b675-3302-45db-b15e-a3ce60747252" containerID="2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4" exitCode=0 Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.982748 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.983516 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" event={"ID":"acf2b675-3302-45db-b15e-a3ce60747252","Type":"ContainerDied","Data":"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4"} Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.983602 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c99bf74-8lcxj" event={"ID":"acf2b675-3302-45db-b15e-a3ce60747252","Type":"ContainerDied","Data":"2395d6a070e482d826ed9d7bfad95008c4f724be0470b7f19a115fe6f358f965"} Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.983647 4786 scope.go:117] "RemoveContainer" containerID="2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4" Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.984980 4786 generic.go:334] "Generic (PLEG): container finished" podID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" containerID="798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0" exitCode=0 Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.985029 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" event={"ID":"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60","Type":"ContainerDied","Data":"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0"} Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.985049 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" event={"ID":"4eba73ff-9436-4d13-a0ec-a42a1cf2ad60","Type":"ContainerDied","Data":"ac37ce7f0f6864a3c008a357d2b943af9f105e9b0b73e3d34c4876fd5301d7ef"} Jan 27 00:09:37 crc kubenswrapper[4786]: I0127 00:09:37.985116 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.023524 4786 scope.go:117] "RemoveContainer" containerID="2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.023682 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:38 crc kubenswrapper[4786]: E0127 00:09:38.024522 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4\": container with ID starting with 2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4 not found: ID does not exist" containerID="2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.024599 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4"} err="failed to get container status \"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4\": rpc error: code = NotFound desc = could not find container \"2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4\": container with ID starting with 2cf4dc4963c346cd4c10ed09b6b3d44c40a9574fa0e6fff6f499fb275657ffc4 not found: ID does not exist" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.024630 4786 scope.go:117] "RemoveContainer" containerID="798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.031731 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54c99bf74-8lcxj"] Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.034191 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.036691 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf6f7b8bc-88xgd"] Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.053177 4786 scope.go:117] "RemoveContainer" containerID="798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0" Jan 27 00:09:38 crc kubenswrapper[4786]: E0127 00:09:38.053657 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0\": container with ID starting with 798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0 not found: ID does not exist" containerID="798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.053701 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0"} err="failed to get container status \"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0\": rpc error: code = NotFound desc = could not find container \"798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0\": container with ID starting with 798d531fa10e56b56f09cb713caea150ecf9138470b36d7c0ea8146120b168d0 not found: ID does not exist" Jan 27 
00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.181611 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl"] Jan 27 00:09:38 crc kubenswrapper[4786]: E0127 00:09:38.183728 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf2b675-3302-45db-b15e-a3ce60747252" containerName="controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.184337 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf2b675-3302-45db-b15e-a3ce60747252" containerName="controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: E0127 00:09:38.184844 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" containerName="route-controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.185076 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" containerName="route-controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.186107 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf2b675-3302-45db-b15e-a3ce60747252" containerName="controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.186359 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" containerName="route-controller-manager" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.187120 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.222853 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl"] Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.222957 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.224695 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.225781 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.226012 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.226443 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.226479 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.231900 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.255518 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-proxy-ca-bundles\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " 
pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.255863 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-client-ca\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.255904 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d5b382-ad38-46ca-ab54-38667bc44660-serving-cert\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.255927 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-config\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.255966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ppp\" (UniqueName: \"kubernetes.io/projected/16d5b382-ad38-46ca-ab54-38667bc44660-kube-api-access-l2ppp\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.357489 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d5b382-ad38-46ca-ab54-38667bc44660-serving-cert\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.357770 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-config\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.357906 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ppp\" (UniqueName: \"kubernetes.io/projected/16d5b382-ad38-46ca-ab54-38667bc44660-kube-api-access-l2ppp\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.357999 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-proxy-ca-bundles\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 
00:09:38.358076 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-client-ca\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.359119 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-proxy-ca-bundles\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.359204 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-config\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.359707 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16d5b382-ad38-46ca-ab54-38667bc44660-client-ca\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.363971 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d5b382-ad38-46ca-ab54-38667bc44660-serving-cert\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.376855 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ppp\" (UniqueName: \"kubernetes.io/projected/16d5b382-ad38-46ca-ab54-38667bc44660-kube-api-access-l2ppp\") pod \"controller-manager-86f6cb8c5b-w7hfl\" (UID: \"16d5b382-ad38-46ca-ab54-38667bc44660\") " pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:38 crc kubenswrapper[4786]: I0127 00:09:38.580609 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.004207 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl"] Jan 27 00:09:39 crc kubenswrapper[4786]: W0127 00:09:39.014666 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d5b382_ad38_46ca_ab54_38667bc44660.slice/crio-cc49cef8db4b7d116e912600afd253047a183f9fc43869ac21afbdbf4c8d7a6c WatchSource:0}: Error finding container cc49cef8db4b7d116e912600afd253047a183f9fc43869ac21afbdbf4c8d7a6c: Status 404 returned error can't find the container with id cc49cef8db4b7d116e912600afd253047a183f9fc43869ac21afbdbf4c8d7a6c Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.153850 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eba73ff-9436-4d13-a0ec-a42a1cf2ad60" path="/var/lib/kubelet/pods/4eba73ff-9436-4d13-a0ec-a42a1cf2ad60/volumes" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.154798 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf2b675-3302-45db-b15e-a3ce60747252" path="/var/lib/kubelet/pods/acf2b675-3302-45db-b15e-a3ce60747252/volumes" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.179125 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz"] Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.179828 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.181673 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.182388 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.182464 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.182545 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.182879 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.186678 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.191884 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz"] Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.272834 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-client-ca\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 
crc kubenswrapper[4786]: I0127 00:09:39.272882 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzlt\" (UniqueName: \"kubernetes.io/projected/71f2c24c-c5d1-4393-b4fb-03892940dad6-kube-api-access-5zzlt\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.272907 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-config\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.272967 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f2c24c-c5d1-4393-b4fb-03892940dad6-serving-cert\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.374377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzlt\" (UniqueName: \"kubernetes.io/projected/71f2c24c-c5d1-4393-b4fb-03892940dad6-kube-api-access-5zzlt\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.374916 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-config\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.374963 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f2c24c-c5d1-4393-b4fb-03892940dad6-serving-cert\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.375010 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-client-ca\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.376083 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-client-ca\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc 
kubenswrapper[4786]: I0127 00:09:39.377022 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f2c24c-c5d1-4393-b4fb-03892940dad6-config\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.389520 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f2c24c-c5d1-4393-b4fb-03892940dad6-serving-cert\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.409329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzlt\" (UniqueName: \"kubernetes.io/projected/71f2c24c-c5d1-4393-b4fb-03892940dad6-kube-api-access-5zzlt\") pod \"route-controller-manager-7cbff4d849-jrhdz\" (UID: \"71f2c24c-c5d1-4393-b4fb-03892940dad6\") " pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.509685 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:39 crc kubenswrapper[4786]: I0127 00:09:39.918862 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz"] Jan 27 00:09:39 crc kubenswrapper[4786]: W0127 00:09:39.930885 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71f2c24c_c5d1_4393_b4fb_03892940dad6.slice/crio-05f2aa292696cbff9770ae63586b205cc1ceb01a8d097ef578d783316f198d93 WatchSource:0}: Error finding container 05f2aa292696cbff9770ae63586b205cc1ceb01a8d097ef578d783316f198d93: Status 404 returned error can't find the container with id 05f2aa292696cbff9770ae63586b205cc1ceb01a8d097ef578d783316f198d93 Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.000331 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" event={"ID":"71f2c24c-c5d1-4393-b4fb-03892940dad6","Type":"ContainerStarted","Data":"05f2aa292696cbff9770ae63586b205cc1ceb01a8d097ef578d783316f198d93"} Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.001517 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" event={"ID":"16d5b382-ad38-46ca-ab54-38667bc44660","Type":"ContainerStarted","Data":"56231a1a0d3b52dd84db8d4598e5dc8df267a8bcc5ad6c7fefbcc5709d59d9a4"} Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.001581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" event={"ID":"16d5b382-ad38-46ca-ab54-38667bc44660","Type":"ContainerStarted","Data":"cc49cef8db4b7d116e912600afd253047a183f9fc43869ac21afbdbf4c8d7a6c"} Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.001907 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.006622 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" Jan 27 00:09:40 crc kubenswrapper[4786]: I0127 00:09:40.024221 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86f6cb8c5b-w7hfl" podStartSLOduration=3.024200308 podStartE2EDuration="3.024200308s" podCreationTimestamp="2026-01-27 00:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:40.020559068 +0000 UTC m=+225.504246111" watchObservedRunningTime="2026-01-27 00:09:40.024200308 +0000 UTC m=+225.507887351" Jan 27 00:09:41 crc kubenswrapper[4786]: I0127 00:09:41.009449 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" event={"ID":"71f2c24c-c5d1-4393-b4fb-03892940dad6","Type":"ContainerStarted","Data":"8ae4e100934d7e2ed4d76ba190dbd89fb0788234e1de48544c83e328144073eb"} Jan 27 00:09:41 crc kubenswrapper[4786]: I0127 00:09:41.009915 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:41 crc kubenswrapper[4786]: I0127 00:09:41.017165 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" Jan 27 00:09:41 crc kubenswrapper[4786]: I0127 00:09:41.026770 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbff4d849-jrhdz" podStartSLOduration=4.026751291 podStartE2EDuration="4.026751291s" podCreationTimestamp="2026-01-27 00:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:41.026355771 +0000 UTC m=+226.510042814" watchObservedRunningTime="2026-01-27 00:09:41.026751291 +0000 UTC m=+226.510438344" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.344476 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.345253 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sgprl" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="registry-server" containerID="cri-o://9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b" gracePeriod=30 Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.354466 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.354708 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-npglr" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="registry-server" containerID="cri-o://cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d" gracePeriod=30 Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.364972 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.365171 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" containerID="cri-o://a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684" gracePeriod=30 Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.375755 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.376068 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wgj2x" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="registry-server" containerID="cri-o://98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f" gracePeriod=30 Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.380621 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.380834 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4nmjh" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="registry-server" containerID="cri-o://7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847" gracePeriod=30 Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.387827 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-znvgc"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.388487 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.401058 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-znvgc"] Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.529569 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5m4\" (UniqueName: \"kubernetes.io/projected/d913b840-845c-4058-a37b-483f582f6ec6-kube-api-access-ft5m4\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.529899 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.529955 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.631431 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.631713 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.631837 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5m4\" (UniqueName: \"kubernetes.io/projected/d913b840-845c-4058-a37b-483f582f6ec6-kube-api-access-ft5m4\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.632865 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.652928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d913b840-845c-4058-a37b-483f582f6ec6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.657282 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5m4\" (UniqueName: \"kubernetes.io/projected/d913b840-845c-4058-a37b-483f582f6ec6-kube-api-access-ft5m4\") pod \"marketplace-operator-79b997595-znvgc\" (UID: \"d913b840-845c-4058-a37b-483f582f6ec6\") " pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.730140 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.780457 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.834872 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wftcz\" (UniqueName: \"kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz\") pod \"9f8d3483-9a00-4a11-88ad-d649483608a5\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.834936 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities\") pod \"9f8d3483-9a00-4a11-88ad-d649483608a5\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.834997 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content\") pod \"9f8d3483-9a00-4a11-88ad-d649483608a5\" (UID: \"9f8d3483-9a00-4a11-88ad-d649483608a5\") " Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.840575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz" (OuterVolumeSpecName: "kube-api-access-wftcz") pod "9f8d3483-9a00-4a11-88ad-d649483608a5" (UID: "9f8d3483-9a00-4a11-88ad-d649483608a5"). InnerVolumeSpecName "kube-api-access-wftcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.841308 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities" (OuterVolumeSpecName: "utilities") pod "9f8d3483-9a00-4a11-88ad-d649483608a5" (UID: "9f8d3483-9a00-4a11-88ad-d649483608a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.910592 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f8d3483-9a00-4a11-88ad-d649483608a5" (UID: "9f8d3483-9a00-4a11-88ad-d649483608a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.949394 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wftcz\" (UniqueName: \"kubernetes.io/projected/9f8d3483-9a00-4a11-88ad-d649483608a5-kube-api-access-wftcz\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.949438 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.949452 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f8d3483-9a00-4a11-88ad-d649483608a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.988513 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.989690 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:09:43 crc kubenswrapper[4786]: I0127 00:09:43.995878 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.025145 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.045487 4786 generic.go:334] "Generic (PLEG): container finished" podID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerID="9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b" exitCode=0 Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.045594 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerDied","Data":"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.045624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgprl" event={"ID":"9f8d3483-9a00-4a11-88ad-d649483608a5","Type":"ContainerDied","Data":"5d398bf3101daf570310357e9a1fc68d0e71b0ddbea69bcd05bad7fefd83e127"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.045643 4786 scope.go:117] "RemoveContainer" containerID="9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.045787 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgprl" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.067137 4786 generic.go:334] "Generic (PLEG): container finished" podID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerID="a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684" exitCode=0 Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.068344 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.068786 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" event={"ID":"3ac94253-4c9b-4dbf-83a5-e582349bbac5","Type":"ContainerDied","Data":"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.068824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rgfxb" event={"ID":"3ac94253-4c9b-4dbf-83a5-e582349bbac5","Type":"ContainerDied","Data":"3015ee4f19c38045ff2da75c878ec3848baa02b67192a6eea341280ca7a32689"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.071810 4786 generic.go:334] "Generic (PLEG): container finished" podID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerID="98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f" exitCode=0 Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.071876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerDied","Data":"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.071907 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgj2x" event={"ID":"6930398b-226e-4cc6-8fbe-5ff39cbe5bab","Type":"ContainerDied","Data":"021db0784f6a4a40fb291f058c397acaf526785afc67e549d1ff4e87fb807b05"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.071977 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgj2x" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.075605 4786 generic.go:334] "Generic (PLEG): container finished" podID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerID="7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847" exitCode=0 Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.075663 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerDied","Data":"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.075690 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nmjh" event={"ID":"08a49023-4d08-4aa5-9e39-c8c0aad82dbf","Type":"ContainerDied","Data":"92b7996bb920075896dd8f113b4eb4f9fffa02613b61d56229753aa18a7b3d3d"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.075747 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4nmjh" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.077558 4786 scope.go:117] "RemoveContainer" containerID="fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.079204 4786 generic.go:334] "Generic (PLEG): container finished" podID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerID="cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d" exitCode=0 Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.079241 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerDied","Data":"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.079268 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npglr" event={"ID":"69fe5646-e88c-4b2e-9808-6550f7d9947c","Type":"ContainerDied","Data":"b759aabebe80cae39165fb66ba8b483a4c87c4dc9700df2b56d47cd33eacf256"} Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.079332 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npglr" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.111156 4786 scope.go:117] "RemoveContainer" containerID="b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.119136 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.122021 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sgprl"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.133383 4786 scope.go:117] "RemoveContainer" containerID="9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.134163 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b\": container with ID starting with 9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b not found: ID does not exist" containerID="9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.134205 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b"} err="failed to get container status \"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b\": rpc error: code = NotFound desc = could not find container \"9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b\": container with ID starting with 9cde512cfda79e8f1917e4acb8477218910465bc854df2b63ed3d21c7e07d01b not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.134235 4786 scope.go:117] "RemoveContainer" containerID="fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.134657 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68\": 
container with ID starting with fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68 not found: ID does not exist" containerID="fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.134696 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68"} err="failed to get container status \"fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68\": rpc error: code = NotFound desc = could not find container \"fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68\": container with ID starting with fc4e57cc44938fb4e31daba96756c5050e2cc0a0d7133ea1f49a73ebceb60f68 not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.134717 4786 scope.go:117] "RemoveContainer" containerID="b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.135046 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8\": container with ID starting with b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8 not found: ID does not exist" containerID="b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.135075 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8"} err="failed to get container status \"b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8\": rpc error: code = NotFound desc = could not find container \"b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8\": container with ID starting with b2aede7d0834dfc3ac15c8060a25e2cc7d43e7340692ebb0735764dab585bac8 not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.135094 4786 scope.go:117] "RemoveContainer" containerID="a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.147804 4786 scope.go:117] "RemoveContainer" containerID="a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.148369 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684\": container with ID starting with a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684 not found: ID does not exist" containerID="a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.148429 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684"} err="failed to get container status \"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684\": rpc error: code = NotFound desc = could not find container \"a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684\": container with ID starting with a048ba6085cb747d797acbf344610fd8687cad5816685542c58f0cf46bcbc684 not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.148456 4786 scope.go:117] "RemoveContainer" 
containerID="98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjr7f\" (UniqueName: \"kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f\") pod \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153057 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca\") pod \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153081 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbvzv\" (UniqueName: \"kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv\") pod \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153131 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content\") pod \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153161 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities\") pod \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153203 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k9wl\" (UniqueName: \"kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl\") pod \"69fe5646-e88c-4b2e-9808-6550f7d9947c\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153238 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities\") pod \"69fe5646-e88c-4b2e-9808-6550f7d9947c\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153273 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content\") pod \"69fe5646-e88c-4b2e-9808-6550f7d9947c\" (UID: \"69fe5646-e88c-4b2e-9808-6550f7d9947c\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153315 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqvvl\" (UniqueName: \"kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl\") pod \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153347 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content\") pod \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\" (UID: \"08a49023-4d08-4aa5-9e39-c8c0aad82dbf\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153377 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics\") pod \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\" (UID: \"3ac94253-4c9b-4dbf-83a5-e582349bbac5\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.153403 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities\") pod \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\" (UID: \"6930398b-226e-4cc6-8fbe-5ff39cbe5bab\") " Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.155690 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities" (OuterVolumeSpecName: "utilities") pod "6930398b-226e-4cc6-8fbe-5ff39cbe5bab" (UID: "6930398b-226e-4cc6-8fbe-5ff39cbe5bab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.155928 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities" (OuterVolumeSpecName: "utilities") pod "69fe5646-e88c-4b2e-9808-6550f7d9947c" (UID: "69fe5646-e88c-4b2e-9808-6550f7d9947c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.156329 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities" (OuterVolumeSpecName: "utilities") pod "08a49023-4d08-4aa5-9e39-c8c0aad82dbf" (UID: "08a49023-4d08-4aa5-9e39-c8c0aad82dbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.158869 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3ac94253-4c9b-4dbf-83a5-e582349bbac5" (UID: "3ac94253-4c9b-4dbf-83a5-e582349bbac5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.159261 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv" (OuterVolumeSpecName: "kube-api-access-wbvzv") pod "08a49023-4d08-4aa5-9e39-c8c0aad82dbf" (UID: "08a49023-4d08-4aa5-9e39-c8c0aad82dbf"). InnerVolumeSpecName "kube-api-access-wbvzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.159422 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl" (OuterVolumeSpecName: "kube-api-access-9k9wl") pod "69fe5646-e88c-4b2e-9808-6550f7d9947c" (UID: "69fe5646-e88c-4b2e-9808-6550f7d9947c"). InnerVolumeSpecName "kube-api-access-9k9wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.159979 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3ac94253-4c9b-4dbf-83a5-e582349bbac5" (UID: "3ac94253-4c9b-4dbf-83a5-e582349bbac5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.160979 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f" (OuterVolumeSpecName: "kube-api-access-jjr7f") pod "6930398b-226e-4cc6-8fbe-5ff39cbe5bab" (UID: "6930398b-226e-4cc6-8fbe-5ff39cbe5bab"). InnerVolumeSpecName "kube-api-access-jjr7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.161499 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl" (OuterVolumeSpecName: "kube-api-access-vqvvl") pod "3ac94253-4c9b-4dbf-83a5-e582349bbac5" (UID: "3ac94253-4c9b-4dbf-83a5-e582349bbac5"). InnerVolumeSpecName "kube-api-access-vqvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.169447 4786 scope.go:117] "RemoveContainer" containerID="4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.182345 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6930398b-226e-4cc6-8fbe-5ff39cbe5bab" (UID: "6930398b-226e-4cc6-8fbe-5ff39cbe5bab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.184510 4786 scope.go:117] "RemoveContainer" containerID="30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.200834 4786 scope.go:117] "RemoveContainer" containerID="98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.201360 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f\": container with ID starting with 98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f not found: ID does not exist" containerID="98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.201519 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f"} err="failed to get container status \"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f\": rpc error: code = NotFound desc = could not find container \"98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f\": container with ID starting with 98ff8c2fa4dd2f415a2415b62a05b98adc4ae8eddbc663d56e76cdb041b0373f not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.201558 4786 scope.go:117] "RemoveContainer" containerID="4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.202960 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045\": container with ID starting with 4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045 not found: ID does not exist" containerID="4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.202996 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045"} err="failed to get container status \"4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045\": rpc error: code = NotFound desc = could not find container \"4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045\": container with ID starting with 4bc3cf754157d68113ab418d21f300681ca578d0cfff8d610245c0209d8f8045 not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.203019 4786 scope.go:117] "RemoveContainer" containerID="30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.203367 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b\": container with ID starting with 30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b not found: ID does not exist" containerID="30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.203391 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b"} err="failed to get container status \"30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b\": rpc error: code = NotFound desc = could not find container \"30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b\": container with ID starting with 30645859d7aa5bcb285da858cb268fa87192b68daa50e0dbc44220a2fcef8d4b not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.203406 4786 scope.go:117] "RemoveContainer" containerID="7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.224427 4786 scope.go:117] "RemoveContainer" containerID="e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.235343 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69fe5646-e88c-4b2e-9808-6550f7d9947c" (UID: "69fe5646-e88c-4b2e-9808-6550f7d9947c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.239742 4786 scope.go:117] "RemoveContainer" containerID="64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.257402 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqvvl\" (UniqueName: \"kubernetes.io/projected/3ac94253-4c9b-4dbf-83a5-e582349bbac5-kube-api-access-vqvvl\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.257721 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.257807 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.257896 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjr7f\" (UniqueName: \"kubernetes.io/projected/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-kube-api-access-jjr7f\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258019 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac94253-4c9b-4dbf-83a5-e582349bbac5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258089 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbvzv\" (UniqueName: \"kubernetes.io/projected/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-kube-api-access-wbvzv\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258176 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6930398b-226e-4cc6-8fbe-5ff39cbe5bab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258249 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258316 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k9wl\" (UniqueName: \"kubernetes.io/projected/69fe5646-e88c-4b2e-9808-6550f7d9947c-kube-api-access-9k9wl\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258381 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258447 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fe5646-e88c-4b2e-9808-6550f7d9947c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.258175 4786 scope.go:117] "RemoveContainer" containerID="7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.259085 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847\": container with ID starting with 7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847 not found: ID does not exist" containerID="7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.259148 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847"} err="failed to get container status \"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847\": rpc error: code = NotFound desc = could not find container \"7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847\": container with ID starting with 7402d0693220bd6ca372236cb0f1ed21dfd9836bee274bc888e3e76a3c41a847 not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.259181 4786 scope.go:117] "RemoveContainer" containerID="e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.259610 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b\": container with ID starting with e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b not found: ID does not exist" containerID="e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.259733 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b"} err="failed to get container status \"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b\": rpc error: code = NotFound desc = could not find container \"e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b\": container with ID starting with e05943eeb86aecba00332dd766faa153bfffd240a701bd268bd1d433eec3e36b not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.259838 4786 scope.go:117] "RemoveContainer" 
containerID="64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.260172 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d\": container with ID starting with 64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d not found: ID does not exist" containerID="64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.260197 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d"} err="failed to get container status \"64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d\": rpc error: code = NotFound desc = could not find container \"64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d\": container with ID starting with 64ed72ce37236a01f37e4097767ce7ffebee9603ee0071d330a2031b94440f8d not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.260269 4786 scope.go:117] "RemoveContainer" containerID="cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.281881 4786 scope.go:117] "RemoveContainer" containerID="1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.287825 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08a49023-4d08-4aa5-9e39-c8c0aad82dbf" (UID: "08a49023-4d08-4aa5-9e39-c8c0aad82dbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.302084 4786 scope.go:117] "RemoveContainer" containerID="c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.319487 4786 scope.go:117] "RemoveContainer" containerID="cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.320203 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d\": container with ID starting with cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d not found: ID does not exist" containerID="cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.320246 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d"} err="failed to get container status \"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d\": rpc error: code = NotFound desc = could not find container \"cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d\": container with ID starting with cd263e3fc716c0727dafc94dfe88ef30225dbae668df007ab07fb56d2ec39e2d not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.320288 4786 scope.go:117] "RemoveContainer" containerID="1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.321713 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b\": container with ID starting with 1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b not found: ID does not exist" containerID="1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.321739 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b"} err="failed to get container status \"1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b\": rpc error: code = NotFound desc = could not find container \"1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b\": container with ID starting with 1d1d7ed36bd9f65cf1cfe0987fba4588706e1b69f2eda1095e977b4ec775f97b not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.321756 4786 scope.go:117] "RemoveContainer" containerID="c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c" Jan 27 00:09:44 crc kubenswrapper[4786]: E0127 00:09:44.322096 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c\": container with ID starting with c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c not found: ID does not exist" containerID="c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.322136 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c"} err="failed to get container status \"c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c\": rpc error: code = NotFound desc = could not find container \"c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c\": container with ID starting with c55267e84b876c3b46440db02bc54181f0f035bfe3b1c525c8570c259185051c not found: ID does not exist" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.332824 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-znvgc"] Jan 27 00:09:44 crc kubenswrapper[4786]: W0127 00:09:44.338386 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd913b840_845c_4058_a37b_483f582f6ec6.slice/crio-aa9859cf9890ba8e8061b241549c9cc35a0726b026a4eaa706e6756baeeb2dcf WatchSource:0}: Error finding container aa9859cf9890ba8e8061b241549c9cc35a0726b026a4eaa706e6756baeeb2dcf: Status 404 returned error can't find the container with id aa9859cf9890ba8e8061b241549c9cc35a0726b026a4eaa706e6756baeeb2dcf Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.359765 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08a49023-4d08-4aa5-9e39-c8c0aad82dbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.430993 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.434158 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4nmjh"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.447692 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.450776 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-npglr"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.464474 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.469737 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgj2x"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.475132 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:09:44 crc kubenswrapper[4786]: I0127 00:09:44.479095 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rgfxb"] Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.086242 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" event={"ID":"d913b840-845c-4058-a37b-483f582f6ec6","Type":"ContainerStarted","Data":"fbeea36c3cb02d8fa3533245b5e5def8438e74f9b8f0621d603a130206e9a87d"} Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.086643 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.086669 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" event={"ID":"d913b840-845c-4058-a37b-483f582f6ec6","Type":"ContainerStarted","Data":"aa9859cf9890ba8e8061b241549c9cc35a0726b026a4eaa706e6756baeeb2dcf"} Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.089841 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.106156 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-znvgc" podStartSLOduration=2.106133187 podStartE2EDuration="2.106133187s" podCreationTimestamp="2026-01-27 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:09:45.101274236 +0000 UTC m=+230.584961279" watchObservedRunningTime="2026-01-27 00:09:45.106133187 +0000 UTC m=+230.589820230" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.180898 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" path="/var/lib/kubelet/pods/08a49023-4d08-4aa5-9e39-c8c0aad82dbf/volumes" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.181768 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" path="/var/lib/kubelet/pods/3ac94253-4c9b-4dbf-83a5-e582349bbac5/volumes" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.182195 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" path="/var/lib/kubelet/pods/6930398b-226e-4cc6-8fbe-5ff39cbe5bab/volumes" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.183274 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" path="/var/lib/kubelet/pods/69fe5646-e88c-4b2e-9808-6550f7d9947c/volumes" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.183823 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" path="/var/lib/kubelet/pods/9f8d3483-9a00-4a11-88ad-d649483608a5/volumes" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617172 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5dj7f"] Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617400 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617416 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617428 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617435 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617445 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617453 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617465 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617472 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617480 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617487 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617496 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617505 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617515 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617522 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="extract-content" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617531 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617538 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617551 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617557 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617567 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617578 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617645 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617654 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617662 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617669 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="extract-utilities" Jan 27 00:09:45 crc kubenswrapper[4786]: E0127 00:09:45.617680 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617688 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617803 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8d3483-9a00-4a11-88ad-d649483608a5" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617823 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6930398b-226e-4cc6-8fbe-5ff39cbe5bab" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617834 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fe5646-e88c-4b2e-9808-6550f7d9947c" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617845 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac94253-4c9b-4dbf-83a5-e582349bbac5" containerName="marketplace-operator" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.617858 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a49023-4d08-4aa5-9e39-c8c0aad82dbf" containerName="registry-server" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.618692 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.620956 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.640126 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5dj7f"] Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.776998 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kvr\" (UniqueName: \"kubernetes.io/projected/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-kube-api-access-69kvr\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.777085 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-catalog-content\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.777111 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-utilities\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.877800 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-catalog-content\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.877843 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-utilities\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.877875 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kvr\" (UniqueName: \"kubernetes.io/projected/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-kube-api-access-69kvr\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.878412 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-utilities\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.878712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-catalog-content\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.895693 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kvr\" (UniqueName: \"kubernetes.io/projected/6adbf4f2-5d00-4957-b9c0-7a68e3a5d184-kube-api-access-69kvr\") pod \"certified-operators-5dj7f\" (UID: \"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184\") " pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:45 crc kubenswrapper[4786]: I0127 00:09:45.946820 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.373864 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5dj7f"] Jan 27 00:09:46 crc kubenswrapper[4786]: W0127 00:09:46.380007 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adbf4f2_5d00_4957_b9c0_7a68e3a5d184.slice/crio-837a0140b42e57ad3c48c0aaf8b2d1ab4e61c2633c1481a0481f1bc579a3ea15 WatchSource:0}: Error finding container 837a0140b42e57ad3c48c0aaf8b2d1ab4e61c2633c1481a0481f1bc579a3ea15: Status 404 returned error can't find the container with id 837a0140b42e57ad3c48c0aaf8b2d1ab4e61c2633c1481a0481f1bc579a3ea15 Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.621199 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.623638 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.626826 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.629106 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.688289 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.688408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.688454 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtgk\" (UniqueName: \"kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.789143 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.789198 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wtgk\" (UniqueName: \"kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.789234 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.789704 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.789877 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content\") pod \"redhat-marketplace-w7dqt\" (UID: 
\"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.809178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wtgk\" (UniqueName: \"kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk\") pod \"redhat-marketplace-w7dqt\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:46 crc kubenswrapper[4786]: I0127 00:09:46.951385 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:47 crc kubenswrapper[4786]: I0127 00:09:47.121015 4786 generic.go:334] "Generic (PLEG): container finished" podID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" containerID="07f3de99eb9cccf362f116fd41e606e6f89f1972f3e80a58d0a349ea4cb1b925" exitCode=0 Jan 27 00:09:47 crc kubenswrapper[4786]: I0127 00:09:47.122099 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5dj7f" event={"ID":"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184","Type":"ContainerDied","Data":"07f3de99eb9cccf362f116fd41e606e6f89f1972f3e80a58d0a349ea4cb1b925"} Jan 27 00:09:47 crc kubenswrapper[4786]: I0127 00:09:47.122130 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5dj7f" event={"ID":"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184","Type":"ContainerStarted","Data":"837a0140b42e57ad3c48c0aaf8b2d1ab4e61c2633c1481a0481f1bc579a3ea15"} Jan 27 00:09:47 crc kubenswrapper[4786]: I0127 00:09:47.372829 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.022995 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78lb5"] Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.024488 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.026553 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.034096 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78lb5"] Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.102958 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs6r\" (UniqueName: \"kubernetes.io/projected/cd63f298-237f-4100-8f4e-9838b123763f-kube-api-access-pjs6r\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.103012 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-catalog-content\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.103045 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-utilities\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.127958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerDied","Data":"878d04614e159ec2833d9ef0ed0aa2c25b719c9dd086c3e8f40228a91f378117"} Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.128953 4786 generic.go:334] "Generic (PLEG): container finished" podID="17299418-877c-4c6d-9473-2bbb4319ac07" containerID="878d04614e159ec2833d9ef0ed0aa2c25b719c9dd086c3e8f40228a91f378117" exitCode=0 Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.129027 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerStarted","Data":"be58f3e4fcff93c0da1de8c8da6880b45928fdaed0a4e8001b77f91e0f435a2b"} Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.205439 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs6r\" (UniqueName: \"kubernetes.io/projected/cd63f298-237f-4100-8f4e-9838b123763f-kube-api-access-pjs6r\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.205504 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-catalog-content\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.205610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-utilities\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.207943 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-catalog-content\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.207951 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd63f298-237f-4100-8f4e-9838b123763f-utilities\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.237716 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs6r\" (UniqueName: \"kubernetes.io/projected/cd63f298-237f-4100-8f4e-9838b123763f-kube-api-access-pjs6r\") pod \"redhat-operators-78lb5\" (UID: \"cd63f298-237f-4100-8f4e-9838b123763f\") " pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.349606 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:48 crc kubenswrapper[4786]: I0127 00:09:48.826215 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78lb5"] Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.017905 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrcmh"] Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.019003 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.020946 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.029881 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcmh"] Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.118452 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwznt\" (UniqueName: \"kubernetes.io/projected/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-kube-api-access-rwznt\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.118542 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-utilities\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.118723 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-catalog-content\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.138654 4786 generic.go:334] "Generic (PLEG): container finished" podID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" containerID="60a72f0a2126a5a056b90831fc0a41fdaa77220046be399d7b63d2bed2a29d56" exitCode=0 Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.138753 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5dj7f" event={"ID":"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184","Type":"ContainerDied","Data":"60a72f0a2126a5a056b90831fc0a41fdaa77220046be399d7b63d2bed2a29d56"} Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.140712 4786 generic.go:334] "Generic (PLEG): container finished" podID="cd63f298-237f-4100-8f4e-9838b123763f" containerID="d47ff370574b6a4fdd917dba954804462d144eb9be19d5863d38ffe9f32d03b3" exitCode=0 Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.140733 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78lb5" event={"ID":"cd63f298-237f-4100-8f4e-9838b123763f","Type":"ContainerDied","Data":"d47ff370574b6a4fdd917dba954804462d144eb9be19d5863d38ffe9f32d03b3"} Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.141230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78lb5" event={"ID":"cd63f298-237f-4100-8f4e-9838b123763f","Type":"ContainerStarted","Data":"f88397b0f99c566475f45b3768f7de127717d8b92328effd11cbb5011da79bef"} Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.148000 4786 generic.go:334] "Generic (PLEG): container finished" podID="17299418-877c-4c6d-9473-2bbb4319ac07" containerID="0ae7c4f36f192b0b9cc217e75d51628a75028608334977351779ad82c71bd706" exitCode=0 Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.162643 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerDied","Data":"0ae7c4f36f192b0b9cc217e75d51628a75028608334977351779ad82c71bd706"} Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.220017 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwznt\" (UniqueName: \"kubernetes.io/projected/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-kube-api-access-rwznt\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.220083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-utilities\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.220103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-catalog-content\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.222505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-utilities\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.222741 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-catalog-content\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.242371 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwznt\" (UniqueName: \"kubernetes.io/projected/321f7da5-39cd-4cd5-a102-4ea98ed4a6c2-kube-api-access-rwznt\") pod \"community-operators-qrcmh\" (UID: \"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2\") " pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.482869 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:49 crc kubenswrapper[4786]: I0127 00:09:49.916641 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcmh"] Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.154907 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5dj7f" event={"ID":"6adbf4f2-5d00-4957-b9c0-7a68e3a5d184","Type":"ContainerStarted","Data":"ce48de8ceb7179ce67794c1c0f90119e8f2a09af01e7dbfed50f03bdcf72e6aa"} Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.157049 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78lb5" event={"ID":"cd63f298-237f-4100-8f4e-9838b123763f","Type":"ContainerStarted","Data":"3389a2ef5a98f93623605ce3242079f2450888581a98541fd7e10cf2f40a8483"} Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.158825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerStarted","Data":"45c6ca1de6fd233a8f07ef580da48bb87961c7a6e0703b528b41290f33c4a4a6"} Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.160121 4786 generic.go:334] "Generic (PLEG): container finished" podID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" containerID="d8f605611a58458703cb35fc49d9b8f0cacd119081ff211866c335f4d5b82429" exitCode=0 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.160157 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcmh" event={"ID":"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2","Type":"ContainerDied","Data":"d8f605611a58458703cb35fc49d9b8f0cacd119081ff211866c335f4d5b82429"} Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.160217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcmh" event={"ID":"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2","Type":"ContainerStarted","Data":"6d000a156d4f2da18cec3ad60dccbca83dc408ff4a295f21249335c161753512"} Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.175443 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5dj7f" podStartSLOduration=2.7663993319999998 podStartE2EDuration="5.175430149s" podCreationTimestamp="2026-01-27 00:09:45 +0000 UTC" firstStartedPulling="2026-01-27 00:09:47.123832492 +0000 UTC m=+232.607519535" lastFinishedPulling="2026-01-27 00:09:49.532863309 +0000 UTC m=+235.016550352" observedRunningTime="2026-01-27 00:09:50.174134626 +0000 UTC m=+235.657821669" watchObservedRunningTime="2026-01-27 00:09:50.175430149 +0000 UTC m=+235.659117192" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.221703 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7dqt" podStartSLOduration=2.782090541 podStartE2EDuration="4.221686758s" podCreationTimestamp="2026-01-27 00:09:46 +0000 UTC" firstStartedPulling="2026-01-27 00:09:48.130433205 +0000 UTC m=+233.614120238" lastFinishedPulling="2026-01-27 00:09:49.570029422 +0000 UTC m=+235.053716455" observedRunningTime="2026-01-27 00:09:50.220139619 +0000 UTC m=+235.703826662" watchObservedRunningTime="2026-01-27 00:09:50.221686758 +0000 UTC m=+235.705373801" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.504205 4786 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.505295 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.505436 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.505798 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965" gracePeriod=15 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.505931 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1" gracePeriod=15 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506006 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b" gracePeriod=15 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506015 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0" gracePeriod=15 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506391 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54" gracePeriod=15 Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506465 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506705 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506718 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506728 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506733 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506744 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506749 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506756 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506761 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506771 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506777 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506789 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506796 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.506804 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506810 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506894 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506902 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506910 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506919 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506932 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.506939 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.555804 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641110 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641148 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641214 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641274 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641294 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641312 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.641340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742233 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742305 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742351 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742367 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742370 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742393 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742433 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742438 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742466 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742496 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742613 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742636 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742692 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742729 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742816 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.742824 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: I0127 00:09:50.853941 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:09:50 crc kubenswrapper[4786]: W0127 00:09:50.879349 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3ec99a2207caaf8a15cebfc4100329e656b5c609d7592a25f64e629d267695c3 WatchSource:0}: Error finding container 3ec99a2207caaf8a15cebfc4100329e656b5c609d7592a25f64e629d267695c3: Status 404 returned error can't find the container with id 3ec99a2207caaf8a15cebfc4100329e656b5c609d7592a25f64e629d267695c3 Jan 27 00:09:50 crc kubenswrapper[4786]: E0127 00:09:50.883056 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6df2434e22d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 00:09:50.881891026 +0000 UTC m=+236.365578069,LastTimestamp:2026-01-27 00:09:50.881891026 +0000 UTC m=+236.365578069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.167615 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcmh" event={"ID":"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2","Type":"ContainerStarted","Data":"bc67dd18352a0c4e0e4e11c93f142f59d1c31c170f3fdabab61e7a227fd2b9b7"} Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.168623 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.169060 4786 generic.go:334] "Generic (PLEG): container finished" podID="cd63f298-237f-4100-8f4e-9838b123763f" containerID="3389a2ef5a98f93623605ce3242079f2450888581a98541fd7e10cf2f40a8483" exitCode=0 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.169049 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.169108 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78lb5" event={"ID":"cd63f298-237f-4100-8f4e-9838b123763f","Type":"ContainerDied","Data":"3389a2ef5a98f93623605ce3242079f2450888581a98541fd7e10cf2f40a8483"} Jan 
27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.169794 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170026 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170148 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7d717143c2199807ab2b2cc149bc478b89eba84d396d50f4812d1873e9d7671a"} Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170170 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3ec99a2207caaf8a15cebfc4100329e656b5c609d7592a25f64e629d267695c3"} Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170307 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170740 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.170980 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.171253 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.172158 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.174776 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.175385 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b" exitCode=0 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.175400 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0" exitCode=0 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.175407 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54" exitCode=0 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.175414 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1" exitCode=2 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.175488 4786 scope.go:117] "RemoveContainer" containerID="d271eff48c2ee18aa8f7d11c407d71006b9c6b75f31841a8a85875cdc270c50f" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.176951 4786 generic.go:334] "Generic (PLEG): container finished" podID="25c53e44-d237-4830-a5f2-f4f5fdc62320" containerID="aab15098eaab48b2fbc6b939fd2e5bc05790d6a0ed5db5b21adf9ee7827458e2" exitCode=0 Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.177101 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25c53e44-d237-4830-a5f2-f4f5fdc62320","Type":"ContainerDied","Data":"aab15098eaab48b2fbc6b939fd2e5bc05790d6a0ed5db5b21adf9ee7827458e2"} Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.177725 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.178047 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.178295 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:51 crc kubenswrapper[4786]: I0127 00:09:51.178472 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.189829 4786 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.192691 4786 generic.go:334] "Generic (PLEG): container finished" podID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" containerID="bc67dd18352a0c4e0e4e11c93f142f59d1c31c170f3fdabab61e7a227fd2b9b7" exitCode=0 Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.192755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcmh" event={"ID":"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2","Type":"ContainerDied","Data":"bc67dd18352a0c4e0e4e11c93f142f59d1c31c170f3fdabab61e7a227fd2b9b7"} Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.193867 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.194287 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.194569 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.194806 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.199417 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78lb5" event={"ID":"cd63f298-237f-4100-8f4e-9838b123763f","Type":"ContainerStarted","Data":"4cf005812ff7d1a739e211b1a9b2fd887b22eb8467be096d1f2a3c3f5a2b23be"} Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.200268 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.200466 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.200752 4786 
status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.202799 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.625580 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.628345 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.628678 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.629108 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.630508 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666257 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock\") pod \"25c53e44-d237-4830-a5f2-f4f5fdc62320\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666344 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access\") pod \"25c53e44-d237-4830-a5f2-f4f5fdc62320\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666360 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir\") pod \"25c53e44-d237-4830-a5f2-f4f5fdc62320\" (UID: \"25c53e44-d237-4830-a5f2-f4f5fdc62320\") " Jan 27 
00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666439 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock" (OuterVolumeSpecName: "var-lock") pod "25c53e44-d237-4830-a5f2-f4f5fdc62320" (UID: "25c53e44-d237-4830-a5f2-f4f5fdc62320"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666591 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.666611 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "25c53e44-d237-4830-a5f2-f4f5fdc62320" (UID: "25c53e44-d237-4830-a5f2-f4f5fdc62320"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.674810 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "25c53e44-d237-4830-a5f2-f4f5fdc62320" (UID: "25c53e44-d237-4830-a5f2-f4f5fdc62320"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.767587 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25c53e44-d237-4830-a5f2-f4f5fdc62320-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:52 crc kubenswrapper[4786]: I0127 00:09:52.767913 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c53e44-d237-4830-a5f2-f4f5fdc62320-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.206495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcmh" event={"ID":"321f7da5-39cd-4cd5-a102-4ea98ed4a6c2","Type":"ContainerStarted","Data":"3fc695708bb64b3a62707fd699f67fc3ef4b8c628d960a4029b7d64d0df13fc8"} Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.207705 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.207868 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.208012 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.208179 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.210503 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.211078 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965" exitCode=0 Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.211122 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc5c95f3a53ccf50c29cac95853de3724e0e1c9444918aa3ec4fccc84d220a3" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.213161 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.217738 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25c53e44-d237-4830-a5f2-f4f5fdc62320","Type":"ContainerDied","Data":"7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a"} Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.217790 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbaac1da148166ea559adda5a55f4d0059e46a8294d7a9f915fcd61204dbd7a" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.968176 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.968761 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.969014 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.969311 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.970857 4786 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.971806 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.972124 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.972436 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.972711 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.972912 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.973109 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982267 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982345 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982380 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982392 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982437 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982447 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982802 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982824 4786 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:53 crc kubenswrapper[4786]: I0127 00:09:53.982863 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.218160 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.230103 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.230318 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.230509 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.230819 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:54 crc kubenswrapper[4786]: I0127 00:09:54.231237 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.154225 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.159269 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.161810 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.170084 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.170224 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.173964 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.358946 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.360023 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.360466 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.360893 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.361531 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.361598 4786 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.361967 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Jan 27 00:09:55 crc kubenswrapper[4786]: E0127 00:09:55.562449 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.947484 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.947776 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:55 crc 
kubenswrapper[4786]: E0127 00:09:55.963069 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.996107 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.996463 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.996700 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.996978 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.997253 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:55 crc kubenswrapper[4786]: I0127 00:09:55.997411 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.263757 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5dj7f" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.264308 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.264792 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.265338 4786 
status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.265810 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.266132 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: E0127 00:09:56.548661 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6df2434e22d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 00:09:50.881891026 +0000 UTC m=+236.365578069,LastTimestamp:2026-01-27 00:09:50.881891026 +0000 UTC m=+236.365578069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 00:09:56 crc kubenswrapper[4786]: E0127 00:09:56.764278 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.951787 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.951835 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.991812 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.992539 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.993172 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.993643 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.993999 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.994245 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:56 crc kubenswrapper[4786]: I0127 00:09:56.994550 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.289334 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.289909 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.290435 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.291017 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc 
kubenswrapper[4786]: I0127 00:09:57.291329 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.291766 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:57 crc kubenswrapper[4786]: I0127 00:09:57.292147 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.350075 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.350466 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:58 crc kubenswrapper[4786]: E0127 00:09:58.365722 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.416640 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.417311 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.417884 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.418376 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.418771 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.419084 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:58 crc kubenswrapper[4786]: I0127 00:09:58.419407 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.309441 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78lb5" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.310021 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.310732 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.311358 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.311700 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.312166 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.312549 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc 
kubenswrapper[4786]: I0127 00:09:59.484035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.484091 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.568654 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.569216 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.569813 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.570487 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.571018 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.571461 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:09:59 crc kubenswrapper[4786]: I0127 00:09:59.571957 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.305911 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrcmh" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.306559 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc 
kubenswrapper[4786]: I0127 00:10:00.306896 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.307195 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.307445 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.307674 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:00 crc kubenswrapper[4786]: I0127 00:10:00.308034 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:01 crc kubenswrapper[4786]: E0127 00:10:01.566197 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="6.4s" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.147891 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.149825 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.150068 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.150254 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.152936 4786 status_manager.go:851] "Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.153256 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.153531 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.170792 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.170819 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:03 crc kubenswrapper[4786]: E0127 00:10:03.171157 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.171699 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:03 crc kubenswrapper[4786]: W0127 00:10:03.197414 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-60473240af5d9bf19e2355519f071497c4bcf40a7673fcbf3e853d16b33f67ae WatchSource:0}: Error finding container 60473240af5d9bf19e2355519f071497c4bcf40a7673fcbf3e853d16b33f67ae: Status 404 returned error can't find the container with id 60473240af5d9bf19e2355519f071497c4bcf40a7673fcbf3e853d16b33f67ae Jan 27 00:10:03 crc kubenswrapper[4786]: I0127 00:10:03.269833 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"60473240af5d9bf19e2355519f071497c4bcf40a7673fcbf3e853d16b33f67ae"} Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.276991 4786 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="55c590d4459c84651ddd5bea7f12edf46e6b1abb974200748df98e6d8c3f1d06" exitCode=0 Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.277110 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"55c590d4459c84651ddd5bea7f12edf46e6b1abb974200748df98e6d8c3f1d06"} Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.277274 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.277386 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:04 crc kubenswrapper[4786]: E0127 00:10:04.278311 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.278379 4786 status_manager.go:851] "Failed to get status for pod" podUID="6adbf4f2-5d00-4957-b9c0-7a68e3a5d184" pod="openshift-marketplace/certified-operators-5dj7f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5dj7f\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.279011 4786 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.279356 4786 status_manager.go:851] "Failed to get status for pod" podUID="cd63f298-237f-4100-8f4e-9838b123763f" pod="openshift-marketplace/redhat-operators-78lb5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78lb5\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.279680 4786 status_manager.go:851] 
"Failed to get status for pod" podUID="321f7da5-39cd-4cd5-a102-4ea98ed4a6c2" pod="openshift-marketplace/community-operators-qrcmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qrcmh\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.280056 4786 status_manager.go:851] "Failed to get status for pod" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" pod="openshift-marketplace/redhat-marketplace-w7dqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w7dqt\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:04 crc kubenswrapper[4786]: I0127 00:10:04.280835 4786 status_manager.go:851] "Failed to get status for pod" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.286701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"532e10a21e1e46dbbf0018f17c93f78f88569e9bb2443756b63b5a9dd0260c4d"} Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.287026 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a83606d9139a28e77449217f0b89571529e5792bc8f3f538158fa7c8b9434b0"} Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.287035 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5f306501a70d478756cee1db4b16ad9830adaee1a3a473451cc593b12bc19d34"} Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.287046 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"76a963fd3f8e0867f95e1d54ca9d357bff13e4f37d3eda9bf23b65a376e183ac"} Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.290115 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.290160 4786 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5" exitCode=1 Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.290187 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5"} Jan 27 00:10:05 crc kubenswrapper[4786]: I0127 00:10:05.291093 4786 scope.go:117] "RemoveContainer" containerID="7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5" Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.297179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58dfa80379ea4d16524089dee2c695c6124c5a08a1ddb4b5606381eebe6eb2c8"} Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.297532 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.297548 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.297820 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.299850 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 00:10:06 crc kubenswrapper[4786]: I0127 00:10:06.299896 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7516e0531a168bdbe0cd2f9acd60840605fb40cfac016984bd8f24090a4d0c8e"} Jan 27 00:10:08 crc kubenswrapper[4786]: I0127 00:10:08.172960 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:08 crc kubenswrapper[4786]: I0127 00:10:08.173596 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:08 crc kubenswrapper[4786]: I0127 00:10:08.181821 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:08 crc kubenswrapper[4786]: I0127 00:10:08.483361 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:10:11 crc kubenswrapper[4786]: I0127 00:10:11.323285 4786 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:11 crc kubenswrapper[4786]: I0127 00:10:11.359179 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="90bdc170-a7cf-406d-bdf0-cb46f2a7ec29" Jan 27 00:10:12 crc kubenswrapper[4786]: I0127 00:10:12.337086 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:12 crc kubenswrapper[4786]: I0127 00:10:12.337129 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:12 crc kubenswrapper[4786]: I0127 00:10:12.341309 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="90bdc170-a7cf-406d-bdf0-cb46f2a7ec29" Jan 27 00:10:12 crc kubenswrapper[4786]: I0127 00:10:12.343932 4786 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://76a963fd3f8e0867f95e1d54ca9d357bff13e4f37d3eda9bf23b65a376e183ac" Jan 27 
00:10:12 crc kubenswrapper[4786]: I0127 00:10:12.343978 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:13 crc kubenswrapper[4786]: I0127 00:10:13.343536 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:13 crc kubenswrapper[4786]: I0127 00:10:13.343631 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:13 crc kubenswrapper[4786]: I0127 00:10:13.348134 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="90bdc170-a7cf-406d-bdf0-cb46f2a7ec29" Jan 27 00:10:14 crc kubenswrapper[4786]: I0127 00:10:14.971642 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:10:14 crc kubenswrapper[4786]: I0127 00:10:14.971858 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 00:10:14 crc kubenswrapper[4786]: I0127 00:10:14.971931 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 00:10:21 crc kubenswrapper[4786]: I0127 00:10:21.586649 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 00:10:21 crc kubenswrapper[4786]: I0127 00:10:21.983124 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.023114 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.259522 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.543915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.625004 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.799605 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.853420 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.861998 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 00:10:22 crc kubenswrapper[4786]: 
I0127 00:10:22.924813 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 00:10:22 crc kubenswrapper[4786]: I0127 00:10:22.940181 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.273259 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.583308 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.687091 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.754525 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.846502 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.959203 4786 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","pod25c53e44-d237-4830-a5f2-f4f5fdc62320"] err="unable to destroy cgroup paths for cgroup [kubepods pod25c53e44-d237-4830-a5f2-f4f5fdc62320] : Timed out while waiting for systemd to remove kubepods-pod25c53e44_d237_4830_a5f2_f4f5fdc62320.slice" Jan 27 00:10:23 crc kubenswrapper[4786]: I0127 00:10:23.973041 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.018449 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.180112 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.189642 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.260072 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.283161 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.288153 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.467760 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.507834 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.652306 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 
00:10:24.676992 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.736617 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.800382 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.911380 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.924161 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.966433 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.971728 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 00:10:24 crc kubenswrapper[4786]: I0127 00:10:24.971810 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.094211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.182607 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.257055 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.436556 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.524970 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.538269 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.604265 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.638796 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.642690 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 00:10:25 crc 
kubenswrapper[4786]: I0127 00:10:25.710918 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.712148 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.765092 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.765266 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.811325 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.822987 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.825623 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.834016 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.905335 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.929347 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 00:10:25 crc kubenswrapper[4786]: I0127 00:10:25.935005 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.010248 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.085375 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.162938 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.170907 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.192487 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.338703 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.352017 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.399319 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 
00:10:26.402643 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.467506 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.602974 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.708600 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.779678 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.896440 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.915050 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.949882 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 00:10:26 crc kubenswrapper[4786]: I0127 00:10:26.976610 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.133667 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.222089 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.254462 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.277392 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.293316 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.362739 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.365802 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.395307 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.602999 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.633118 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.736244 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.822927 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.924024 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.958916 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 00:10:27 crc kubenswrapper[4786]: I0127 00:10:27.998843 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.021814 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.027517 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.030087 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.076944 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.098103 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.198388 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.206400 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.212791 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.282168 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.298663 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.305120 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.440013 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.484618 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.497923 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.545165 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.692226 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.798297 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.815598 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 00:10:28 crc kubenswrapper[4786]: I0127 00:10:28.991819 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.115494 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.240853 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.309190 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.322800 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.331503 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.336172 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.369786 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.394359 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.446506 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.458018 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.458277 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.620675 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.667359 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.677272 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.707664 4786 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.719466 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.721461 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.797359 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.803682 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 00:10:29 crc kubenswrapper[4786]: I0127 00:10:29.806155 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.086027 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.166764 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.178989 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.198457 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.272016 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.441917 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.496686 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.523529 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.663240 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.672979 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.697869 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.759728 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.809762 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.850704 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.913504 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.923919 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 00:10:30 crc kubenswrapper[4786]: I0127 00:10:30.943101 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.097753 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.114099 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.131131 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.177669 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.191932 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.229984 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.280885 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.337670 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.407540 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.667275 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.687089 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.724072 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.797463 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.821480 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.821793 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.830295 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 00:10:31 crc kubenswrapper[4786]: I0127 00:10:31.964880 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.039200 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.049132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.130401 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.133311 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.154979 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.223168 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.369871 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.419037 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.525021 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.534828 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.552843 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.579937 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.773299 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.800453 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.880715 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 00:10:32 crc kubenswrapper[4786]: I0127 00:10:32.975273 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.012424 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.098527 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.176924 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.227413 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.228465 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.247498 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.287084 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.302305 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.332312 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.396725 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.447659 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.455915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.482706 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.482925 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.485425 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.492878 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.542553 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.553747 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.587128 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.650225 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.652473 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.778481 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.822943 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 00:10:33 crc kubenswrapper[4786]: I0127 00:10:33.990634 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.003704 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.054302 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.101766 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.180970 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.213956 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.247408 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.333871 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.377127 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.449326 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.471154 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.572437 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.633994 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.640204 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.648961 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.697028 4786 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.769489 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.821282 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.827268 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.836083 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.842842 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.857819 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.869341 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.971807 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.972134 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.972235 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.972730 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.973860 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7516e0531a168bdbe0cd2f9acd60840605fb40cfac016984bd8f24090a4d0c8e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 00:10:34 crc kubenswrapper[4786]: I0127 00:10:34.974306 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://7516e0531a168bdbe0cd2f9acd60840605fb40cfac016984bd8f24090a4d0c8e" gracePeriod=30 Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.068403 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.120345 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.148289 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.149715 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.150680 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.234975 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.237269 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78lb5" podStartSLOduration=44.548162616 podStartE2EDuration="47.237252209s" podCreationTimestamp="2026-01-27 00:09:48 +0000 UTC" firstStartedPulling="2026-01-27 00:09:49.143559089 +0000 UTC m=+234.627246132" lastFinishedPulling="2026-01-27 00:09:51.832648682 +0000 UTC m=+237.316335725" observedRunningTime="2026-01-27 00:10:11.220133205 +0000 UTC m=+256.703820308" watchObservedRunningTime="2026-01-27 00:10:35.237252209 +0000 UTC m=+280.720939262" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.238098 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.238086761 podStartE2EDuration="45.238086761s" podCreationTimestamp="2026-01-27 00:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:10:11.356031783 +0000 UTC m=+256.839718836" watchObservedRunningTime="2026-01-27 00:10:35.238086761 +0000 UTC m=+280.721773824" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.239710 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrcmh" podStartSLOduration=43.789156287 podStartE2EDuration="46.239700575s" podCreationTimestamp="2026-01-27 00:09:49 +0000 UTC" firstStartedPulling="2026-01-27 00:09:50.161016991 +0000 UTC m=+235.644704024" lastFinishedPulling="2026-01-27 00:09:52.611561269 +0000 UTC m=+238.095248312" observedRunningTime="2026-01-27 00:10:11.247486794 +0000 UTC m=+256.731173877" watchObservedRunningTime="2026-01-27 00:10:35.239700575 +0000 UTC m=+280.723387628" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.240770 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.240824 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.241198 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.241226 4786 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c04e7fb1-392d-4713-82ae-b82f94f1fc50" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.245046 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.279111 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.279095679 podStartE2EDuration="24.279095679s" podCreationTimestamp="2026-01-27 00:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:10:35.261764841 +0000 UTC m=+280.745451964" watchObservedRunningTime="2026-01-27 00:10:35.279095679 +0000 UTC m=+280.762782722" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.338800 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.350706 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.417282 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.634549 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.635754 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.692315 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.707830 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 00:10:35 crc kubenswrapper[4786]: I0127 00:10:35.973933 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.180557 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.259505 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.314691 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.332517 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.355363 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.356118 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" 
Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.450984 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.536402 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.567775 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.734662 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.742188 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 00:10:36 crc kubenswrapper[4786]: I0127 00:10:36.881082 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 00:10:37 crc kubenswrapper[4786]: I0127 00:10:37.020158 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 00:10:37 crc kubenswrapper[4786]: I0127 00:10:37.320459 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 00:10:37 crc kubenswrapper[4786]: I0127 00:10:37.339078 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 00:10:38 crc kubenswrapper[4786]: I0127 00:10:38.402008 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 00:10:39 crc kubenswrapper[4786]: I0127 00:10:39.025542 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 00:10:39 crc kubenswrapper[4786]: I0127 00:10:39.780164 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 00:10:45 crc kubenswrapper[4786]: I0127 00:10:45.266219 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 00:10:45 crc kubenswrapper[4786]: I0127 00:10:45.266793 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7d717143c2199807ab2b2cc149bc478b89eba84d396d50f4812d1873e9d7671a" gracePeriod=5 Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.595864 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.596201 4786 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7d717143c2199807ab2b2cc149bc478b89eba84d396d50f4812d1873e9d7671a" exitCode=137 Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.868535 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.868632 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998152 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998217 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998275 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998311 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998371 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998372 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998423 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998511 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998612 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998831 4786 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998851 4786 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998862 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:10:50 crc kubenswrapper[4786]: I0127 00:10:50.998873 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.013095 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.099740 4786 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.158916 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.159487 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.176660 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.176726 4786 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e29e14a7-7444-417b-a061-a92caf05712c" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.206107 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.206212 4786 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e29e14a7-7444-417b-a061-a92caf05712c" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.606544 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.606623 4786 scope.go:117] "RemoveContainer" containerID="7d717143c2199807ab2b2cc149bc478b89eba84d396d50f4812d1873e9d7671a" Jan 27 00:10:51 crc kubenswrapper[4786]: I0127 00:10:51.606729 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 00:10:54 crc kubenswrapper[4786]: I0127 00:10:54.939997 4786 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 00:11:05 crc kubenswrapper[4786]: I0127 00:11:05.709598 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 00:11:05 crc kubenswrapper[4786]: I0127 00:11:05.712458 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 00:11:05 crc kubenswrapper[4786]: I0127 00:11:05.712589 4786 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7516e0531a168bdbe0cd2f9acd60840605fb40cfac016984bd8f24090a4d0c8e" exitCode=137 Jan 27 00:11:05 crc kubenswrapper[4786]: I0127 00:11:05.712641 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7516e0531a168bdbe0cd2f9acd60840605fb40cfac016984bd8f24090a4d0c8e"} Jan 27 00:11:05 crc kubenswrapper[4786]: I0127 00:11:05.712887 4786 scope.go:117] "RemoveContainer" containerID="7c5fd93b0329b7835de67cd84c411912ae092a4530033c75e601a81d24951fe5" Jan 27 00:11:06 crc kubenswrapper[4786]: I0127 00:11:06.722361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 00:11:06 crc kubenswrapper[4786]: I0127 00:11:06.724539 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"486cc670b502ac31c588dbc7164f4214ce46d02b4b54f0247bc5848d268b9bff"} Jan 27 00:11:08 crc kubenswrapper[4786]: I0127 00:11:08.483455 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:11:14 crc kubenswrapper[4786]: I0127 00:11:14.971207 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:11:14 crc kubenswrapper[4786]: I0127 00:11:14.976091 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:11:15 crc kubenswrapper[4786]: I0127 00:11:15.806785 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 00:11:20 crc kubenswrapper[4786]: I0127 00:11:20.344453 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:11:20 crc kubenswrapper[4786]: I0127 00:11:20.345108 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:11:50 crc kubenswrapper[4786]: I0127 00:11:50.345074 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:11:50 crc kubenswrapper[4786]: I0127 00:11:50.346816 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.677880 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g6n5j"] Jan 27 00:11:57 crc kubenswrapper[4786]: E0127 00:11:57.678657 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.678673 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 00:11:57 crc kubenswrapper[4786]: E0127 00:11:57.678700 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" containerName="installer" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.678709 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" containerName="installer" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.678828 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.678842 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c53e44-d237-4830-a5f2-f4f5fdc62320" containerName="installer" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.679208 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.706090 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g6n5j"] Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791821 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-trusted-ca\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791881 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-registry-tls\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791920 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-bound-sa-token\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791948 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2575177-bd4a-4328-88af-d46d24a22b61-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791976 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.791991 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-registry-certificates\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.792009 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8jk\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-kube-api-access-xc8jk\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.792048 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a2575177-bd4a-4328-88af-d46d24a22b61-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.809732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.892876 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-bound-sa-token\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893145 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2575177-bd4a-4328-88af-d46d24a22b61-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-registry-certificates\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893362 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8jk\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-kube-api-access-xc8jk\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893459 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2575177-bd4a-4328-88af-d46d24a22b61-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893622 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-trusted-ca\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.893732 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-registry-tls\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.894029 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2575177-bd4a-4328-88af-d46d24a22b61-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.894660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-registry-certificates\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.895121 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2575177-bd4a-4328-88af-d46d24a22b61-trusted-ca\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.898283 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2575177-bd4a-4328-88af-d46d24a22b61-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.901532 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-registry-tls\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.911420 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8jk\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-kube-api-access-xc8jk\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:57 crc kubenswrapper[4786]: I0127 00:11:57.920073 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2575177-bd4a-4328-88af-d46d24a22b61-bound-sa-token\") pod \"image-registry-66df7c8f76-g6n5j\" (UID: \"a2575177-bd4a-4328-88af-d46d24a22b61\") " pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:58 crc kubenswrapper[4786]: I0127 00:11:57.999981 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:11:58 crc kubenswrapper[4786]: I0127 00:11:58.208018 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g6n5j"] Jan 27 00:11:59 crc kubenswrapper[4786]: I0127 00:11:59.033291 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" event={"ID":"a2575177-bd4a-4328-88af-d46d24a22b61","Type":"ContainerStarted","Data":"7106333d13759c9c0ca6b50ed6435ef1b6177f81a90b3ba15f3ab27e84fe4b6c"} Jan 27 00:11:59 crc kubenswrapper[4786]: I0127 00:11:59.033689 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" event={"ID":"a2575177-bd4a-4328-88af-d46d24a22b61","Type":"ContainerStarted","Data":"01d64fc17fe25baf5ef9e8163e3f5c42b576856de214150fc56bd6aeb20d0e33"} Jan 27 00:11:59 crc kubenswrapper[4786]: I0127 00:11:59.052666 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" podStartSLOduration=2.052647463 podStartE2EDuration="2.052647463s" podCreationTimestamp="2026-01-27 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:11:59.048247474 +0000 UTC m=+364.531934517" watchObservedRunningTime="2026-01-27 00:11:59.052647463 +0000 UTC m=+364.536334526" Jan 27 00:12:00 crc kubenswrapper[4786]: I0127 00:12:00.038114 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:12:18 crc kubenswrapper[4786]: I0127 00:12:18.007429 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g6n5j" Jan 27 00:12:18 crc kubenswrapper[4786]: I0127 00:12:18.066422 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:12:20 crc kubenswrapper[4786]: I0127 00:12:20.345100 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:12:20 crc kubenswrapper[4786]: I0127 00:12:20.345823 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:12:20 crc kubenswrapper[4786]: I0127 00:12:20.345890 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:12:20 crc kubenswrapper[4786]: I0127 00:12:20.346712 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:12:20 crc kubenswrapper[4786]: I0127 00:12:20.346801 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" containerID="cri-o://63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead" gracePeriod=600 Jan 27 00:12:21 crc kubenswrapper[4786]: I0127 00:12:21.155442 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead" exitCode=0 Jan 27 00:12:21 crc kubenswrapper[4786]: I0127 00:12:21.162041 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead"} Jan 27 00:12:21 crc kubenswrapper[4786]: I0127 00:12:21.162100 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37"} Jan 27 00:12:21 crc kubenswrapper[4786]: I0127 00:12:21.162130 4786 scope.go:117] "RemoveContainer" containerID="624de57b2498833c1b3d2b404d384b513ec5e5478108d50603652428f184cdf8" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.121892 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" podUID="c99c57e7-f694-4e94-aaf4-cefa5df36513" containerName="registry" containerID="cri-o://f6f634a3ce61db9d459be266de8c680acbdf90882639f99b4fcca7940d99b3fe" gracePeriod=30 Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.314646 4786 generic.go:334] "Generic (PLEG): container finished" podID="c99c57e7-f694-4e94-aaf4-cefa5df36513" containerID="f6f634a3ce61db9d459be266de8c680acbdf90882639f99b4fcca7940d99b3fe" exitCode=0 Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.314768 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" event={"ID":"c99c57e7-f694-4e94-aaf4-cefa5df36513","Type":"ContainerDied","Data":"f6f634a3ce61db9d459be266de8c680acbdf90882639f99b4fcca7940d99b3fe"} Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.608987 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652391 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652471 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652631 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652687 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx9fx\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652726 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652749 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652790 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.652816 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls\") pod \"c99c57e7-f694-4e94-aaf4-cefa5df36513\" (UID: \"c99c57e7-f694-4e94-aaf4-cefa5df36513\") " Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.654708 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.654856 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.660342 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.661858 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx" (OuterVolumeSpecName: "kube-api-access-zx9fx") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "kube-api-access-zx9fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.662384 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.666776 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.669454 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.677913 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c99c57e7-f694-4e94-aaf4-cefa5df36513" (UID: "c99c57e7-f694-4e94-aaf4-cefa5df36513"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755235 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c99c57e7-f694-4e94-aaf4-cefa5df36513-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755299 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755324 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx9fx\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-kube-api-access-zx9fx\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755344 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c99c57e7-f694-4e94-aaf4-cefa5df36513-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755427 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755462 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c99c57e7-f694-4e94-aaf4-cefa5df36513-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:43 crc kubenswrapper[4786]: I0127 00:12:43.755488 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c99c57e7-f694-4e94-aaf4-cefa5df36513-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 00:12:44 crc kubenswrapper[4786]: I0127 00:12:44.324434 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" event={"ID":"c99c57e7-f694-4e94-aaf4-cefa5df36513","Type":"ContainerDied","Data":"ff6865fe4438724d1f1a5b4926a61575ba63451db89b610b3bbbb91d341f5293"} Jan 27 00:12:44 crc kubenswrapper[4786]: I0127 00:12:44.324503 4786 scope.go:117] "RemoveContainer" containerID="f6f634a3ce61db9d459be266de8c680acbdf90882639f99b4fcca7940d99b3fe" Jan 27 00:12:44 crc kubenswrapper[4786]: I0127 00:12:44.324536 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc7zp" Jan 27 00:12:44 crc kubenswrapper[4786]: I0127 00:12:44.392388 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:12:44 crc kubenswrapper[4786]: I0127 00:12:44.400277 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc7zp"] Jan 27 00:12:45 crc kubenswrapper[4786]: I0127 00:12:45.160452 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99c57e7-f694-4e94-aaf4-cefa5df36513" path="/var/lib/kubelet/pods/c99c57e7-f694-4e94-aaf4-cefa5df36513/volumes" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.390103 4786 scope.go:117] "RemoveContainer" containerID="f0999e06ce42ebed8f6d62f9ceb20dccacf71c5539cd6c9fa7969f1bfa089e54" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.416424 4786 scope.go:117] "RemoveContainer" containerID="9aa3b327963e39dd96d82dd71a5e40afdd6eb735aab57330416dbc5f17edb965" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.441914 4786 scope.go:117] "RemoveContainer" containerID="41722bb4bfd4082c75d66c0dcafddf1364c8be7c3e8838df9cf46ce249648ec0" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.463531 4786 scope.go:117] "RemoveContainer" containerID="e6b71886f02fdfded20495757a559fab44ea759781206ef1b381fe4d117bd89b" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.486392 4786 scope.go:117] "RemoveContainer" containerID="7dfa6033aed4731dd44025b8f31aab69c0c0cba18f2f6fc4e788d6b97dfdabd1" Jan 27 00:12:55 crc kubenswrapper[4786]: I0127 00:12:55.503732 4786 scope.go:117] "RemoveContainer" containerID="32154f4ac7b09f4f0a8f48ee842ba3bef1469237926908b469967760ddb5d58c" Jan 27 00:14:20 crc kubenswrapper[4786]: I0127 00:14:20.344430 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:14:20 crc kubenswrapper[4786]: I0127 00:14:20.345079 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.430712 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fqh9p"] Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431651 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-controller" containerID="cri-o://dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431678 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="sbdb" containerID="cri-o://8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431784 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="nbdb" containerID="cri-o://cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431851 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="northd" containerID="cri-o://552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431891 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431934 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-node" containerID="cri-o://3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.431973 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-acl-logging" containerID="cri-o://be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.466976 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" containerID="cri-o://1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" gracePeriod=30 Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.785433 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/3.log" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.789455 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovn-acl-logging/0.log" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.790098 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovn-controller/0.log" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.790660 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841469 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cnpdp"] Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841686 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841701 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841711 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841716 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841724 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kubecfg-setup" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841730 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kubecfg-setup" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841738 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841743 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841751 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="sbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841757 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="sbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841763 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="nbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841769 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="nbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841779 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-acl-logging" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841785 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-acl-logging" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841798 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99c57e7-f694-4e94-aaf4-cefa5df36513" containerName="registry" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841803 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99c57e7-f694-4e94-aaf4-cefa5df36513" containerName="registry" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841811 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" 
containerName="kube-rbac-proxy-node" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841816 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-node" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841824 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="northd" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841830 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="northd" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841838 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841844 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.841851 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841857 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841931 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841939 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841947 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-node" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841956 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841966 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841973 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99c57e7-f694-4e94-aaf4-cefa5df36513" containerName="registry" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841981 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="nbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841987 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-acl-logging" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.841994 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="sbdb" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842002 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="northd" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842010 4786 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovn-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.842090 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842098 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: E0127 00:14:46.842106 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842138 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842223 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.842384 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerName="ovnkube-controller" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.843815 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865620 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865661 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865675 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865697 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865716 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865746 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865761 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865778 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865807 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865827 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865867 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zbcd\" (UniqueName: \"kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865882 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865904 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865918 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865933 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865950 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865969 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.865987 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866036 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units\") pod \"629f8cf2-3b6f-404b-814f-1e613f80e63e\" (UID: \"629f8cf2-3b6f-404b-814f-1e613f80e63e\") " Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866145 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866166 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-bin\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866184 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-netns\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866209 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zms2q\" (UniqueName: \"kubernetes.io/projected/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-kube-api-access-zms2q\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866258 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-env-overrides\") pod \"ovnkube-node-cnpdp\" (UID: 
\"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866280 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-ovn\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866303 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-etc-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866341 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-var-lib-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866370 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-netd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866390 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-systemd-units\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866404 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-node-log\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866422 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovn-node-metrics-cert\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-kubelet\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866452 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-config\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866473 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-script-lib\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866488 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-log-socket\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-systemd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866516 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-slash\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866530 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866592 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866669 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866694 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866729 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.866744 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867151 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867479 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867514 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867532 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867835 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867858 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867875 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867899 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket" (OuterVolumeSpecName: "log-socket") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867903 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867920 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867932 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash" (OuterVolumeSpecName: "host-slash") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.867950 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log" (OuterVolumeSpecName: "node-log") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.872648 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd" (OuterVolumeSpecName: "kube-api-access-6zbcd") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "kube-api-access-6zbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.873461 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.887967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "629f8cf2-3b6f-404b-814f-1e613f80e63e" (UID: "629f8cf2-3b6f-404b-814f-1e613f80e63e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970155 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-netd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-systemd-units\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970225 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-node-log\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970248 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovn-node-metrics-cert\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970266 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-kubelet\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-config\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970311 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-script-lib\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970327 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-systemd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970341 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-log-socket\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-slash\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970397 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970388 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-netd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970421 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970440 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-bin\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970460 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-netns\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970464 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-kubelet\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970463 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970508 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-systemd\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-log-socket\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970541 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-cni-bin\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970408 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-node-log\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970545 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970562 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-netns\") pod \"ovnkube-node-cnpdp\" (UID: 
\"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970603 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-slash\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970631 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970739 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zms2q\" (UniqueName: \"kubernetes.io/projected/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-kube-api-access-zms2q\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970923 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-ovn\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970965 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-env-overrides\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971004 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-etc-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971027 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-var-lib-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971090 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971110 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971130 4786 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971149 4786 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971164 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971176 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zbcd\" (UniqueName: \"kubernetes.io/projected/629f8cf2-3b6f-404b-814f-1e613f80e63e-kube-api-access-6zbcd\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971182 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-script-lib\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971189 4786 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971220 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-var-lib-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971321 4786 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971342 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-etc-openvswitch\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971358 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovnkube-config\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971374 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971362 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-run-ovn\") pod \"ovnkube-node-cnpdp\" (UID: 
\"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971395 4786 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971412 4786 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971428 4786 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971443 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971455 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/629f8cf2-3b6f-404b-814f-1e613f80e63e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971467 4786 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971479 4786 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971495 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971508 4786 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971519 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971531 4786 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/629f8cf2-3b6f-404b-814f-1e613f80e63e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.971906 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-env-overrides\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.970328 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-systemd-units\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.978293 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-ovn-node-metrics-cert\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:46 crc kubenswrapper[4786]: I0127 00:14:46.985588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zms2q\" (UniqueName: \"kubernetes.io/projected/788a3385-b7c6-4ac5-ab2b-6918c4ce0da9-kube-api-access-zms2q\") pod \"ovnkube-node-cnpdp\" (UID: \"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.109014 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovnkube-controller/3.log" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.112630 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovn-acl-logging/0.log" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.113461 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fqh9p_629f8cf2-3b6f-404b-814f-1e613f80e63e/ovn-controller/0.log" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114027 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114063 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114082 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114094 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114105 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114117 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" exitCode=0 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114129 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" exitCode=143 Jan 27 
00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114169 4786 generic.go:334] "Generic (PLEG): container finished" podID="629f8cf2-3b6f-404b-814f-1e613f80e63e" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" exitCode=143 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114118 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114125 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114391 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114426 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114422 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114453 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114507 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114532 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114554 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114599 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114618 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:14:47 crc 
kubenswrapper[4786]: I0127 00:14:47.114633 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114648 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114662 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114676 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114690 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114714 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114741 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114760 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114776 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114791 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114806 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114820 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114834 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114848 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:14:47 crc 
kubenswrapper[4786]: I0127 00:14:47.114862 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114879 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114901 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114925 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114944 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114958 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114973 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.114987 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115003 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115020 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115035 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115050 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115065 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115091 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fqh9p" 
event={"ID":"629f8cf2-3b6f-404b-814f-1e613f80e63e","Type":"ContainerDied","Data":"53c9b258c899a3126cd3056c657af284d6115e0d7a735e291b783da7daa2f830"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115116 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115134 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115149 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115165 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115181 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115195 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115210 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115224 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115238 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.115252 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.116616 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/2.log" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.117465 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/1.log" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.117526 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d790bab-fb2b-4745-a195-65359a962f52" containerID="dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf" exitCode=2 Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.117563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" 
event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerDied","Data":"dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.117628 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a"} Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.118075 4786 scope.go:117] "RemoveContainer" containerID="dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.118411 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-phvd5_openshift-multus(8d790bab-fb2b-4745-a195-65359a962f52)\"" pod="openshift-multus/multus-phvd5" podUID="8d790bab-fb2b-4745-a195-65359a962f52" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.142478 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.159880 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.165907 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fqh9p"] Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.167759 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fqh9p"] Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.173378 4786 scope.go:117] "RemoveContainer" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.187652 4786 scope.go:117] "RemoveContainer" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.214219 4786 scope.go:117] "RemoveContainer" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.226711 4786 scope.go:117] "RemoveContainer" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.238934 4786 scope.go:117] "RemoveContainer" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.257773 4786 scope.go:117] "RemoveContainer" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.289717 4786 scope.go:117] "RemoveContainer" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.302978 4786 scope.go:117] "RemoveContainer" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.317696 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.317993 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318022 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} err="failed to get container status \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": rpc error: code = NotFound desc = could not find container \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318043 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.318509 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": container with ID starting with d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185 not found: ID does not exist" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318531 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} err="failed to get container status \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": rpc error: code = NotFound desc = could not find container \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": container with ID starting with d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318543 4786 scope.go:117] "RemoveContainer" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.318746 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": container with ID starting with 8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f not found: ID does not exist" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318762 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} err="failed to get container status \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": rpc error: code = NotFound desc = could not find container \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": container with ID starting with 8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.318776 4786 scope.go:117] "RemoveContainer" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc 
kubenswrapper[4786]: E0127 00:14:47.319011 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": container with ID starting with cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630 not found: ID does not exist" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319031 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} err="failed to get container status \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": rpc error: code = NotFound desc = could not find container \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": container with ID starting with cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319043 4786 scope.go:117] "RemoveContainer" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.319220 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": container with ID starting with 552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c not found: ID does not exist" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319240 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} err="failed to get container status \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": rpc error: code = NotFound desc = could not find container \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": container with ID starting with 552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319252 4786 scope.go:117] "RemoveContainer" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.319454 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": container with ID starting with 5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2 not found: ID does not exist" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319474 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} err="failed to get container status \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": rpc error: code = NotFound desc = could not find container \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": container with ID starting with 5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: 
I0127 00:14:47.319488 4786 scope.go:117] "RemoveContainer" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.319728 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": container with ID starting with 3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95 not found: ID does not exist" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319779 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} err="failed to get container status \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": rpc error: code = NotFound desc = could not find container \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": container with ID starting with 3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319795 4786 scope.go:117] "RemoveContainer" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.319965 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": container with ID starting with be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a not found: ID does not exist" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319984 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} err="failed to get container status \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": rpc error: code = NotFound desc = could not find container \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": container with ID starting with be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.319996 4786 scope.go:117] "RemoveContainer" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.320186 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": container with ID starting with dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938 not found: ID does not exist" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.320208 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} err="failed to get container status \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": rpc error: code = NotFound desc = could not find container \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": container 
with ID starting with dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.320220 4786 scope.go:117] "RemoveContainer" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: E0127 00:14:47.321756 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": container with ID starting with 8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523 not found: ID does not exist" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.321801 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} err="failed to get container status \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": rpc error: code = NotFound desc = could not find container \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": container with ID starting with 8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.321829 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.322128 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} err="failed to get container status \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": rpc error: code = NotFound desc = could not find container \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.322172 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.322416 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} err="failed to get container status \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": rpc error: code = NotFound desc = could not find container \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": container with ID starting with d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.322441 4786 scope.go:117] "RemoveContainer" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.323220 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} err="failed to get container status \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": rpc error: code = NotFound desc = could not find container \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": container 
with ID starting with 8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.323254 4786 scope.go:117] "RemoveContainer" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.323758 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} err="failed to get container status \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": rpc error: code = NotFound desc = could not find container \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": container with ID starting with cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.323785 4786 scope.go:117] "RemoveContainer" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324000 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} err="failed to get container status \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": rpc error: code = NotFound desc = could not find container \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": container with ID starting with 552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324021 4786 scope.go:117] "RemoveContainer" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324208 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} err="failed to get container status \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": rpc error: code = NotFound desc = could not find container \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": container with ID starting with 5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324232 4786 scope.go:117] "RemoveContainer" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324506 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} err="failed to get container status \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": rpc error: code = NotFound desc = could not find container \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": container with ID starting with 3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324532 4786 scope.go:117] "RemoveContainer" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324765 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} err="failed to get container status \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": rpc error: code = NotFound desc = could not find container \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": container with ID starting with be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.324788 4786 scope.go:117] "RemoveContainer" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325437 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} err="failed to get container status \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": rpc error: code = NotFound desc = could not find container \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": container with ID starting with dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325460 4786 scope.go:117] "RemoveContainer" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325711 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} err="failed to get container status \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": rpc error: code = NotFound desc = could not find container \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": container with ID starting with 8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325728 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325926 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} err="failed to get container status \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": rpc error: code = NotFound desc = could not find container \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.325943 4786 scope.go:117] "RemoveContainer" containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326231 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} err="failed to get container status \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": rpc error: code = NotFound desc = could not find container \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": container with ID starting with d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185 not found: ID does not exist" Jan 
27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326256 4786 scope.go:117] "RemoveContainer" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326517 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} err="failed to get container status \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": rpc error: code = NotFound desc = could not find container \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": container with ID starting with 8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326536 4786 scope.go:117] "RemoveContainer" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326784 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} err="failed to get container status \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": rpc error: code = NotFound desc = could not find container \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": container with ID starting with cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.326804 4786 scope.go:117] "RemoveContainer" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327210 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} err="failed to get container status \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": rpc error: code = NotFound desc = could not find container \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": container with ID starting with 552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327231 4786 scope.go:117] "RemoveContainer" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327496 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} err="failed to get container status \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": rpc error: code = NotFound desc = could not find container \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": container with ID starting with 5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327532 4786 scope.go:117] "RemoveContainer" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327818 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} err="failed to get container status 
\"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": rpc error: code = NotFound desc = could not find container \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": container with ID starting with 3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.327847 4786 scope.go:117] "RemoveContainer" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328260 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} err="failed to get container status \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": rpc error: code = NotFound desc = could not find container \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": container with ID starting with be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328280 4786 scope.go:117] "RemoveContainer" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328520 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} err="failed to get container status \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": rpc error: code = NotFound desc = could not find container \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": container with ID starting with dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328536 4786 scope.go:117] "RemoveContainer" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328772 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} err="failed to get container status \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": rpc error: code = NotFound desc = could not find container \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": container with ID starting with 8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.328796 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.329846 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} err="failed to get container status \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": rpc error: code = NotFound desc = could not find container \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.329890 4786 scope.go:117] "RemoveContainer" 
containerID="d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330199 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185"} err="failed to get container status \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": rpc error: code = NotFound desc = could not find container \"d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185\": container with ID starting with d089e369192ab7efd99f05b3e66f920e35eea966ba9bf1e22e8ef6c2dbbe8185 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330224 4786 scope.go:117] "RemoveContainer" containerID="8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330465 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f"} err="failed to get container status \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": rpc error: code = NotFound desc = could not find container \"8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f\": container with ID starting with 8dbed95bc6d9e82ff85f69029a94a91ec05103fca686a47a63e02d0512b1929f not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330485 4786 scope.go:117] "RemoveContainer" containerID="cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330828 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630"} err="failed to get container status \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": rpc error: code = NotFound desc = could not find container \"cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630\": container with ID starting with cc30c236180aac06ca3830f4e16417e43dd06d16b23c0c417e98d027fe24a630 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.330856 4786 scope.go:117] "RemoveContainer" containerID="552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331108 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c"} err="failed to get container status \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": rpc error: code = NotFound desc = could not find container \"552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c\": container with ID starting with 552f2a2540f4ccf74d98ca899477ec85c6414619c14734c73b20ae0053d7301c not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331135 4786 scope.go:117] "RemoveContainer" containerID="5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331412 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2"} err="failed to get container status \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": rpc error: code = NotFound desc = could not find 
container \"5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2\": container with ID starting with 5ac69de8f4278ebaa6590bfd351c8fd10f9b4d2fb16f47042c5321888a2e44f2 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331437 4786 scope.go:117] "RemoveContainer" containerID="3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331696 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95"} err="failed to get container status \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": rpc error: code = NotFound desc = could not find container \"3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95\": container with ID starting with 3e24a48525a1bf9ae15077140f8518d52a02d503308d729c380ef0a514ebed95 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331716 4786 scope.go:117] "RemoveContainer" containerID="be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331963 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a"} err="failed to get container status \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": rpc error: code = NotFound desc = could not find container \"be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a\": container with ID starting with be43169f39e0c9a0f97988f28448ac5464952f9d4eafd175b7eb9d8fb1821e6a not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.331979 4786 scope.go:117] "RemoveContainer" containerID="dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.332216 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938"} err="failed to get container status \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": rpc error: code = NotFound desc = could not find container \"dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938\": container with ID starting with dc2894bde157ef4e9b8d8a0cc623c0d8b2a6b6950f73eb5cc9450a41d755b938 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.332233 4786 scope.go:117] "RemoveContainer" containerID="8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.332528 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523"} err="failed to get container status \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": rpc error: code = NotFound desc = could not find container \"8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523\": container with ID starting with 8be8db0555f009d0dc76d177334bad82dce7426755eb3f1875300c7e0dae7523 not found: ID does not exist" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.332544 4786 scope.go:117] "RemoveContainer" containerID="1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1" Jan 27 00:14:47 crc kubenswrapper[4786]: I0127 00:14:47.332768 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1"} err="failed to get container status \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": rpc error: code = NotFound desc = could not find container \"1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1\": container with ID starting with 1ba9c66f1caff34afead35267e9bf1840a7d6a2b953f5ab57840c84a1ba741d1 not found: ID does not exist" Jan 27 00:14:48 crc kubenswrapper[4786]: I0127 00:14:48.133025 4786 generic.go:334] "Generic (PLEG): container finished" podID="788a3385-b7c6-4ac5-ab2b-6918c4ce0da9" containerID="7f78a4916704330693095d7f676e1c8ad60f85e67fe13f3ec66648c5a43cab3a" exitCode=0 Jan 27 00:14:48 crc kubenswrapper[4786]: I0127 00:14:48.133106 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerDied","Data":"7f78a4916704330693095d7f676e1c8ad60f85e67fe13f3ec66648c5a43cab3a"} Jan 27 00:14:48 crc kubenswrapper[4786]: I0127 00:14:48.134845 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"31e7825f2c94f6d2bdf95719737bc12ba3b26a6c5a2bd76f117cd43468bc0080"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.144723 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"fd5b9eeffffd8f642c70979de26bbbc1d23c9ecab8a4703850f213b883e1247e"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.145099 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"e920e3bc30e4b010ebdb1d6d264ff808d954a5b45c8e5fe0f2afbf8fea471bab"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.145115 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"cc4f6d9ac04dd481dddbdef2a7e96538f9e954b8db8c7587abaf7397c38c2785"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.145129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"d7b4457609968dda0c0a0cd22911a16f356765ac43f82a7390802eeee56cd79f"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.145141 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"b1db63c51823cb9d039eb1b4edb8574a33ab137df6b86281f35a690e64dd92bc"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.145155 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"c9b2cc0af647f67425228094f284c8ae6947ddcfa01109d7d3d4c468aa625307"} Jan 27 00:14:49 crc kubenswrapper[4786]: I0127 00:14:49.154956 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629f8cf2-3b6f-404b-814f-1e613f80e63e" 
path="/var/lib/kubelet/pods/629f8cf2-3b6f-404b-814f-1e613f80e63e/volumes" Jan 27 00:14:50 crc kubenswrapper[4786]: I0127 00:14:50.345252 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:14:50 crc kubenswrapper[4786]: I0127 00:14:50.345355 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:14:51 crc kubenswrapper[4786]: I0127 00:14:51.160005 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"5ada131c3b6721614462849e4564f0f157d2d64f29e0637ed02dc295da537312"} Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.177894 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" event={"ID":"788a3385-b7c6-4ac5-ab2b-6918c4ce0da9","Type":"ContainerStarted","Data":"3986910010a845b28c9b1778772f497be69364fea71c2a193d25a0cfe8a7a68f"} Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.178469 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.178484 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.178493 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.204912 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.205224 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:14:54 crc kubenswrapper[4786]: I0127 00:14:54.207638 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" podStartSLOduration=8.207620466 podStartE2EDuration="8.207620466s" podCreationTimestamp="2026-01-27 00:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:14:54.203767353 +0000 UTC m=+539.687454406" watchObservedRunningTime="2026-01-27 00:14:54.207620466 +0000 UTC m=+539.691307509" Jan 27 00:14:55 crc kubenswrapper[4786]: I0127 00:14:55.578859 4786 scope.go:117] "RemoveContainer" containerID="a9be016420c75c4feaf9735cd09c45d0be237192f03124c50731e1de9a9e375a" Jan 27 00:14:56 crc kubenswrapper[4786]: I0127 00:14:56.189101 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/2.log" Jan 27 00:14:58 crc kubenswrapper[4786]: I0127 00:14:58.147817 4786 scope.go:117] "RemoveContainer" containerID="dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf" Jan 27 00:14:58 crc 
kubenswrapper[4786]: E0127 00:14:58.148638 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-phvd5_openshift-multus(8d790bab-fb2b-4745-a195-65359a962f52)\"" pod="openshift-multus/multus-phvd5" podUID="8d790bab-fb2b-4745-a195-65359a962f52" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.185538 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r"] Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.186739 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.190318 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.190602 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.194917 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r"] Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.345143 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.345235 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.345260 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrwn\" (UniqueName: \"kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.446785 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.447026 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whrwn\" (UniqueName: \"kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.447152 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.447726 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.454112 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.464980 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whrwn\" (UniqueName: \"kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn\") pod \"collect-profiles-29491215-58q5r\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: I0127 00:15:00.506897 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: E0127 00:15:00.539414 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(671e2cace290c42b060177bd2f3a8bf9eaa98122beaef235e5680a6cfe15a6d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 00:15:00 crc kubenswrapper[4786]: E0127 00:15:00.539527 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(671e2cace290c42b060177bd2f3a8bf9eaa98122beaef235e5680a6cfe15a6d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: E0127 00:15:00.539555 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(671e2cace290c42b060177bd2f3a8bf9eaa98122beaef235e5680a6cfe15a6d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:00 crc kubenswrapper[4786]: E0127 00:15:00.539682 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager(d484238f-4bce-44a3-b4ec-ad1aa0166ae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager(d484238f-4bce-44a3-b4ec-ad1aa0166ae2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(671e2cace290c42b060177bd2f3a8bf9eaa98122beaef235e5680a6cfe15a6d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" podUID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" Jan 27 00:15:01 crc kubenswrapper[4786]: I0127 00:15:01.221193 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:01 crc kubenswrapper[4786]: I0127 00:15:01.221738 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:01 crc kubenswrapper[4786]: E0127 00:15:01.244211 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(f238a55b841e30e1529c43ec86fe4e5423f3c8565bcdfeeec04ce9077bc91667): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 00:15:01 crc kubenswrapper[4786]: E0127 00:15:01.244270 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(f238a55b841e30e1529c43ec86fe4e5423f3c8565bcdfeeec04ce9077bc91667): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:01 crc kubenswrapper[4786]: E0127 00:15:01.244290 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(f238a55b841e30e1529c43ec86fe4e5423f3c8565bcdfeeec04ce9077bc91667): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:01 crc kubenswrapper[4786]: E0127 00:15:01.244335 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager(d484238f-4bce-44a3-b4ec-ad1aa0166ae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager(d484238f-4bce-44a3-b4ec-ad1aa0166ae2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29491215-58q5r_openshift-operator-lifecycle-manager_d484238f-4bce-44a3-b4ec-ad1aa0166ae2_0(f238a55b841e30e1529c43ec86fe4e5423f3c8565bcdfeeec04ce9077bc91667): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" podUID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" Jan 27 00:15:10 crc kubenswrapper[4786]: I0127 00:15:10.147951 4786 scope.go:117] "RemoveContainer" containerID="dafa35d165ff0d11e549cafa4254271802d1389371075de24520398e7b4714bf" Jan 27 00:15:11 crc kubenswrapper[4786]: I0127 00:15:11.279505 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-phvd5_8d790bab-fb2b-4745-a195-65359a962f52/kube-multus/2.log" Jan 27 00:15:11 crc kubenswrapper[4786]: I0127 00:15:11.279806 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-phvd5" event={"ID":"8d790bab-fb2b-4745-a195-65359a962f52","Type":"ContainerStarted","Data":"26c6c230cc8729fc9b6fa0e81e394d11d022469a9f2fe25b3f917a300980b6f1"} Jan 27 00:15:14 crc kubenswrapper[4786]: I0127 00:15:14.146745 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:14 crc kubenswrapper[4786]: I0127 00:15:14.154609 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:14 crc kubenswrapper[4786]: I0127 00:15:14.326964 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r"] Jan 27 00:15:15 crc kubenswrapper[4786]: I0127 00:15:15.308766 4786 generic.go:334] "Generic (PLEG): container finished" podID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" containerID="46fbb125fd24b4bb3387cfd20d360ba5fd8a6f19aa2c423a13ce98af6453a063" exitCode=0 Jan 27 00:15:15 crc kubenswrapper[4786]: I0127 00:15:15.308828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" event={"ID":"d484238f-4bce-44a3-b4ec-ad1aa0166ae2","Type":"ContainerDied","Data":"46fbb125fd24b4bb3387cfd20d360ba5fd8a6f19aa2c423a13ce98af6453a063"} Jan 27 00:15:15 crc kubenswrapper[4786]: I0127 00:15:15.308892 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" event={"ID":"d484238f-4bce-44a3-b4ec-ad1aa0166ae2","Type":"ContainerStarted","Data":"ab74a33d22ad53830cfd373c0039a60c8a2067c824d70b1b6d0ff82f7288bed1"} Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.620738 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.755619 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume\") pod \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.755708 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whrwn\" (UniqueName: \"kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn\") pod \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.755821 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume\") pod \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\" (UID: \"d484238f-4bce-44a3-b4ec-ad1aa0166ae2\") " Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.756710 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume" (OuterVolumeSpecName: "config-volume") pod "d484238f-4bce-44a3-b4ec-ad1aa0166ae2" (UID: "d484238f-4bce-44a3-b4ec-ad1aa0166ae2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.761560 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn" (OuterVolumeSpecName: "kube-api-access-whrwn") pod "d484238f-4bce-44a3-b4ec-ad1aa0166ae2" (UID: "d484238f-4bce-44a3-b4ec-ad1aa0166ae2"). InnerVolumeSpecName "kube-api-access-whrwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.767775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d484238f-4bce-44a3-b4ec-ad1aa0166ae2" (UID: "d484238f-4bce-44a3-b4ec-ad1aa0166ae2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.858973 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.859063 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whrwn\" (UniqueName: \"kubernetes.io/projected/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-kube-api-access-whrwn\") on node \"crc\" DevicePath \"\"" Jan 27 00:15:16 crc kubenswrapper[4786]: I0127 00:15:16.859101 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d484238f-4bce-44a3-b4ec-ad1aa0166ae2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 00:15:17 crc kubenswrapper[4786]: I0127 00:15:17.195860 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnpdp" Jan 27 00:15:17 crc kubenswrapper[4786]: I0127 00:15:17.323139 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" event={"ID":"d484238f-4bce-44a3-b4ec-ad1aa0166ae2","Type":"ContainerDied","Data":"ab74a33d22ad53830cfd373c0039a60c8a2067c824d70b1b6d0ff82f7288bed1"} Jan 27 00:15:17 crc kubenswrapper[4786]: I0127 00:15:17.323196 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab74a33d22ad53830cfd373c0039a60c8a2067c824d70b1b6d0ff82f7288bed1" Jan 27 00:15:17 crc kubenswrapper[4786]: I0127 00:15:17.323221 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491215-58q5r" Jan 27 00:15:20 crc kubenswrapper[4786]: I0127 00:15:20.345043 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:15:20 crc kubenswrapper[4786]: I0127 00:15:20.345370 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:15:20 crc kubenswrapper[4786]: I0127 00:15:20.345416 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:15:20 crc kubenswrapper[4786]: I0127 00:15:20.346062 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:15:20 crc kubenswrapper[4786]: I0127 00:15:20.346130 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" 
containerID="cri-o://7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37" gracePeriod=600 Jan 27 00:15:21 crc kubenswrapper[4786]: I0127 00:15:21.347226 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37" exitCode=0 Jan 27 00:15:21 crc kubenswrapper[4786]: I0127 00:15:21.347297 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37"} Jan 27 00:15:21 crc kubenswrapper[4786]: I0127 00:15:21.347956 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c"} Jan 27 00:15:21 crc kubenswrapper[4786]: I0127 00:15:21.347985 4786 scope.go:117] "RemoveContainer" containerID="63abc102b8fa89f4332336660666c98a15247c02eaa4c6d535ed38e393ca6ead" Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.488422 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.489605 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7dqt" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="registry-server" containerID="cri-o://45c6ca1de6fd233a8f07ef580da48bb87961c7a6e0703b528b41290f33c4a4a6" gracePeriod=30 Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.638750 4786 generic.go:334] "Generic (PLEG): container finished" podID="17299418-877c-4c6d-9473-2bbb4319ac07" containerID="45c6ca1de6fd233a8f07ef580da48bb87961c7a6e0703b528b41290f33c4a4a6" exitCode=0 Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.638817 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerDied","Data":"45c6ca1de6fd233a8f07ef580da48bb87961c7a6e0703b528b41290f33c4a4a6"} Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.813188 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.962391 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities\") pod \"17299418-877c-4c6d-9473-2bbb4319ac07\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.962502 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content\") pod \"17299418-877c-4c6d-9473-2bbb4319ac07\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.962605 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wtgk\" (UniqueName: \"kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk\") pod \"17299418-877c-4c6d-9473-2bbb4319ac07\" (UID: \"17299418-877c-4c6d-9473-2bbb4319ac07\") " Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.966732 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities" (OuterVolumeSpecName: "utilities") pod "17299418-877c-4c6d-9473-2bbb4319ac07" (UID: "17299418-877c-4c6d-9473-2bbb4319ac07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:05 crc kubenswrapper[4786]: I0127 00:16:05.972145 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk" (OuterVolumeSpecName: "kube-api-access-8wtgk") pod "17299418-877c-4c6d-9473-2bbb4319ac07" (UID: "17299418-877c-4c6d-9473-2bbb4319ac07"). InnerVolumeSpecName "kube-api-access-8wtgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.004752 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17299418-877c-4c6d-9473-2bbb4319ac07" (UID: "17299418-877c-4c6d-9473-2bbb4319ac07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.063816 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wtgk\" (UniqueName: \"kubernetes.io/projected/17299418-877c-4c6d-9473-2bbb4319ac07-kube-api-access-8wtgk\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.063873 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.063895 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17299418-877c-4c6d-9473-2bbb4319ac07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.646280 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7dqt" event={"ID":"17299418-877c-4c6d-9473-2bbb4319ac07","Type":"ContainerDied","Data":"be58f3e4fcff93c0da1de8c8da6880b45928fdaed0a4e8001b77f91e0f435a2b"} Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.646315 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7dqt" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.646679 4786 scope.go:117] "RemoveContainer" containerID="45c6ca1de6fd233a8f07ef580da48bb87961c7a6e0703b528b41290f33c4a4a6" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.666693 4786 scope.go:117] "RemoveContainer" containerID="0ae7c4f36f192b0b9cc217e75d51628a75028608334977351779ad82c71bd706" Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.674174 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.677831 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7dqt"] Jan 27 00:16:06 crc kubenswrapper[4786]: I0127 00:16:06.688931 4786 scope.go:117] "RemoveContainer" containerID="878d04614e159ec2833d9ef0ed0aa2c25b719c9dd086c3e8f40228a91f378117" Jan 27 00:16:07 crc kubenswrapper[4786]: I0127 00:16:07.169002 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" path="/var/lib/kubelet/pods/17299418-877c-4c6d-9473-2bbb4319ac07/volumes" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.423422 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8"] Jan 27 00:16:09 crc kubenswrapper[4786]: E0127 00:16:09.424028 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="registry-server" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424048 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="registry-server" Jan 27 00:16:09 crc kubenswrapper[4786]: E0127 00:16:09.424071 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" containerName="collect-profiles" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424081 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" containerName="collect-profiles" Jan 27 00:16:09 crc 
kubenswrapper[4786]: E0127 00:16:09.424096 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="extract-utilities" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424107 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="extract-utilities" Jan 27 00:16:09 crc kubenswrapper[4786]: E0127 00:16:09.424122 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="extract-content" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424131 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="extract-content" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424273 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d484238f-4bce-44a3-b4ec-ad1aa0166ae2" containerName="collect-profiles" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.424303 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="17299418-877c-4c6d-9473-2bbb4319ac07" containerName="registry-server" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.425395 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.428287 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.440060 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8"] Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.513849 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqrz\" (UniqueName: \"kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.513923 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.513944 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.615651 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xqrz\" (UniqueName: \"kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.615707 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.615730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.616202 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.616281 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.643472 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xqrz\" (UniqueName: \"kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:09 crc kubenswrapper[4786]: I0127 00:16:09.749231 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:10 crc kubenswrapper[4786]: I0127 00:16:10.026024 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8"] Jan 27 00:16:10 crc kubenswrapper[4786]: I0127 00:16:10.682930 4786 generic.go:334] "Generic (PLEG): container finished" podID="dbb96edc-2ef9-4042-9392-a4172d877678" containerID="932d6ed4ea6a32be586920314b5f854583e4015570a4fefe7f193af10c455700" exitCode=0 Jan 27 00:16:10 crc kubenswrapper[4786]: I0127 00:16:10.683037 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerDied","Data":"932d6ed4ea6a32be586920314b5f854583e4015570a4fefe7f193af10c455700"} Jan 27 00:16:10 crc kubenswrapper[4786]: I0127 00:16:10.683293 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerStarted","Data":"b6709d3a91f11da5234ff3738243976400d1ef8499d8a11748721206ab590bb1"} Jan 27 00:16:10 crc kubenswrapper[4786]: I0127 00:16:10.687077 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 00:16:11 crc kubenswrapper[4786]: I0127 00:16:11.696115 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerStarted","Data":"6b3ae3a5236a8e0d4fa5b1613b7bcd0fe8f26c2a901461135d9d125560dca09d"} Jan 27 00:16:12 crc kubenswrapper[4786]: I0127 00:16:12.706885 4786 generic.go:334] "Generic (PLEG): container finished" podID="dbb96edc-2ef9-4042-9392-a4172d877678" containerID="6b3ae3a5236a8e0d4fa5b1613b7bcd0fe8f26c2a901461135d9d125560dca09d" exitCode=0 Jan 27 00:16:12 crc kubenswrapper[4786]: I0127 00:16:12.706947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerDied","Data":"6b3ae3a5236a8e0d4fa5b1613b7bcd0fe8f26c2a901461135d9d125560dca09d"} Jan 27 00:16:13 crc kubenswrapper[4786]: I0127 00:16:13.717757 4786 generic.go:334] "Generic (PLEG): container finished" podID="dbb96edc-2ef9-4042-9392-a4172d877678" containerID="399b9c10a39674e3f802dbabe903a54d6d7172123be09c2a31e98dc3ce2ad58e" exitCode=0 Jan 27 00:16:13 crc kubenswrapper[4786]: I0127 00:16:13.717967 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerDied","Data":"399b9c10a39674e3f802dbabe903a54d6d7172123be09c2a31e98dc3ce2ad58e"} Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.018840 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.204410 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle\") pod \"dbb96edc-2ef9-4042-9392-a4172d877678\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.204513 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xqrz\" (UniqueName: \"kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz\") pod \"dbb96edc-2ef9-4042-9392-a4172d877678\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.204595 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util\") pod \"dbb96edc-2ef9-4042-9392-a4172d877678\" (UID: \"dbb96edc-2ef9-4042-9392-a4172d877678\") " Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.206996 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle" (OuterVolumeSpecName: "bundle") pod "dbb96edc-2ef9-4042-9392-a4172d877678" (UID: "dbb96edc-2ef9-4042-9392-a4172d877678"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.215743 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz" (OuterVolumeSpecName: "kube-api-access-5xqrz") pod "dbb96edc-2ef9-4042-9392-a4172d877678" (UID: "dbb96edc-2ef9-4042-9392-a4172d877678"). InnerVolumeSpecName "kube-api-access-5xqrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.229174 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util" (OuterVolumeSpecName: "util") pod "dbb96edc-2ef9-4042-9392-a4172d877678" (UID: "dbb96edc-2ef9-4042-9392-a4172d877678"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.306339 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-util\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.307097 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbb96edc-2ef9-4042-9392-a4172d877678-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.307195 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xqrz\" (UniqueName: \"kubernetes.io/projected/dbb96edc-2ef9-4042-9392-a4172d877678-kube-api-access-5xqrz\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.736423 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" event={"ID":"dbb96edc-2ef9-4042-9392-a4172d877678","Type":"ContainerDied","Data":"b6709d3a91f11da5234ff3738243976400d1ef8499d8a11748721206ab590bb1"} Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.736707 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6709d3a91f11da5234ff3738243976400d1ef8499d8a11748721206ab590bb1" Jan 27 00:16:15 crc kubenswrapper[4786]: I0127 00:16:15.736560 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.605919 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x"] Jan 27 00:16:17 crc kubenswrapper[4786]: E0127 00:16:17.606098 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="util" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.606109 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="util" Jan 27 00:16:17 crc kubenswrapper[4786]: E0127 00:16:17.606123 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="extract" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.606129 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="extract" Jan 27 00:16:17 crc kubenswrapper[4786]: E0127 00:16:17.606136 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="pull" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.606154 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="pull" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.606252 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb96edc-2ef9-4042-9392-a4172d877678" containerName="extract" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.606917 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.609623 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.617313 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x"] Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.738366 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.738646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v9s\" (UniqueName: \"kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.738749 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.839808 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.839950 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5v9s\" (UniqueName: \"kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.840009 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.840355 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.840376 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.857742 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5v9s\" (UniqueName: \"kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:17 crc kubenswrapper[4786]: I0127 00:16:17.923839 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.161224 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x"] Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.424959 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6"] Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.426652 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.440986 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6"] Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.549730 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.549824 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8n7r\" (UniqueName: \"kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.549952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.651062 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.651135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8n7r\" (UniqueName: \"kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.651187 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.652003 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.652005 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.684375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8n7r\" (UniqueName: \"kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.759711 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerID="3123cf85cded420fbd0396f6676c2551e6e224f7a99f5415d87e3d5376a953e1" exitCode=0 Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.759818 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" event={"ID":"bcb2611d-11f3-4185-bea4-f995f1378bac","Type":"ContainerDied","Data":"3123cf85cded420fbd0396f6676c2551e6e224f7a99f5415d87e3d5376a953e1"} Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.759872 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" event={"ID":"bcb2611d-11f3-4185-bea4-f995f1378bac","Type":"ContainerStarted","Data":"b07c3be635da8613ab85b313a980c15b987f82837d817c7b91672b402a477051"} Jan 27 00:16:18 crc kubenswrapper[4786]: I0127 00:16:18.796412 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.038729 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6"] Jan 27 00:16:19 crc kubenswrapper[4786]: W0127 00:16:19.050999 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb648cac_9420_4f49_bacd_96220d1d201c.slice/crio-c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d WatchSource:0}: Error finding container c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d: Status 404 returned error can't find the container with id c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.768564 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerID="58d288cbd6b07d77143f4cd3d8a534fc3c13e018d2d85a1974609671834d571d" exitCode=0 Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.768699 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" event={"ID":"bcb2611d-11f3-4185-bea4-f995f1378bac","Type":"ContainerDied","Data":"58d288cbd6b07d77143f4cd3d8a534fc3c13e018d2d85a1974609671834d571d"} Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.770375 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb648cac-9420-4f49-bacd-96220d1d201c" containerID="ea42e75acf839d536bea1b593f70bccf947c253fe9336b9b6b29bd8ac88bf59f" exitCode=0 Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.770405 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" event={"ID":"cb648cac-9420-4f49-bacd-96220d1d201c","Type":"ContainerDied","Data":"ea42e75acf839d536bea1b593f70bccf947c253fe9336b9b6b29bd8ac88bf59f"} Jan 27 00:16:19 crc kubenswrapper[4786]: I0127 00:16:19.770423 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" event={"ID":"cb648cac-9420-4f49-bacd-96220d1d201c","Type":"ContainerStarted","Data":"c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d"} Jan 27 00:16:20 crc kubenswrapper[4786]: I0127 00:16:20.786728 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerID="c176bf13f91dc903e06823a566c38fa937855c1dc8ce91776458af90ae7cc278" exitCode=0 Jan 27 00:16:20 crc kubenswrapper[4786]: I0127 00:16:20.786804 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" event={"ID":"bcb2611d-11f3-4185-bea4-f995f1378bac","Type":"ContainerDied","Data":"c176bf13f91dc903e06823a566c38fa937855c1dc8ce91776458af90ae7cc278"} Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.150012 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.225448 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b"] Jan 27 00:16:22 crc kubenswrapper[4786]: E0127 00:16:22.225642 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="util" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.225654 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="util" Jan 27 00:16:22 crc kubenswrapper[4786]: E0127 00:16:22.225664 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="pull" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.225669 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="pull" Jan 27 00:16:22 crc kubenswrapper[4786]: E0127 00:16:22.225685 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="extract" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.225692 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="extract" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.225779 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb2611d-11f3-4185-bea4-f995f1378bac" containerName="extract" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.226420 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.257096 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b"] Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.297936 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util\") pod \"bcb2611d-11f3-4185-bea4-f995f1378bac\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.298107 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5v9s\" (UniqueName: \"kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s\") pod \"bcb2611d-11f3-4185-bea4-f995f1378bac\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.298155 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle\") pod \"bcb2611d-11f3-4185-bea4-f995f1378bac\" (UID: \"bcb2611d-11f3-4185-bea4-f995f1378bac\") " Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.299289 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle" (OuterVolumeSpecName: "bundle") pod "bcb2611d-11f3-4185-bea4-f995f1378bac" (UID: "bcb2611d-11f3-4185-bea4-f995f1378bac"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.312821 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util" (OuterVolumeSpecName: "util") pod "bcb2611d-11f3-4185-bea4-f995f1378bac" (UID: "bcb2611d-11f3-4185-bea4-f995f1378bac"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.319703 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s" (OuterVolumeSpecName: "kube-api-access-j5v9s") pod "bcb2611d-11f3-4185-bea4-f995f1378bac" (UID: "bcb2611d-11f3-4185-bea4-f995f1378bac"). InnerVolumeSpecName "kube-api-access-j5v9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399584 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399627 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rqr\" (UniqueName: \"kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399666 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399740 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-util\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399755 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5v9s\" (UniqueName: \"kubernetes.io/projected/bcb2611d-11f3-4185-bea4-f995f1378bac-kube-api-access-j5v9s\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.399766 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcb2611d-11f3-4185-bea4-f995f1378bac-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.500798 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.500860 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8rqr\" (UniqueName: \"kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.500909 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.501352 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.501420 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.521037 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8rqr\" (UniqueName: \"kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.540177 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.800823 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" event={"ID":"bcb2611d-11f3-4185-bea4-f995f1378bac","Type":"ContainerDied","Data":"b07c3be635da8613ab85b313a980c15b987f82837d817c7b91672b402a477051"} Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.801147 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07c3be635da8613ab85b313a980c15b987f82837d817c7b91672b402a477051" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.801219 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x" Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.813830 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb648cac-9420-4f49-bacd-96220d1d201c" containerID="b223653ab4ae48e6f5b51157002c4a37675132c386fefd732420f00884560919" exitCode=0 Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.813902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" event={"ID":"cb648cac-9420-4f49-bacd-96220d1d201c","Type":"ContainerDied","Data":"b223653ab4ae48e6f5b51157002c4a37675132c386fefd732420f00884560919"} Jan 27 00:16:22 crc kubenswrapper[4786]: I0127 00:16:22.896068 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b"] Jan 27 00:16:23 crc kubenswrapper[4786]: I0127 00:16:23.820980 4786 generic.go:334] "Generic (PLEG): container finished" podID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerID="dd7dd8a925e19ff7058f9994226b7cdb3a55895efc7e80e20d71e90ced53c083" exitCode=0 Jan 27 00:16:23 crc kubenswrapper[4786]: I0127 00:16:23.821067 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerDied","Data":"dd7dd8a925e19ff7058f9994226b7cdb3a55895efc7e80e20d71e90ced53c083"} Jan 27 00:16:23 crc kubenswrapper[4786]: I0127 00:16:23.821466 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerStarted","Data":"ff5fbcb2e82b856e4e75c1d5196f889ad5eada5f211b1ecd564af4dcede404a7"} Jan 27 00:16:23 crc kubenswrapper[4786]: I0127 00:16:23.824949 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb648cac-9420-4f49-bacd-96220d1d201c" containerID="5373239375eec2fbc5ae77e79c2d4fcdfd78f5d84f0f5a77346bbecc90422d9a" exitCode=0 Jan 27 00:16:23 crc kubenswrapper[4786]: I0127 00:16:23.824992 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" event={"ID":"cb648cac-9420-4f49-bacd-96220d1d201c","Type":"ContainerDied","Data":"5373239375eec2fbc5ae77e79c2d4fcdfd78f5d84f0f5a77346bbecc90422d9a"} Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.258714 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.438684 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util\") pod \"cb648cac-9420-4f49-bacd-96220d1d201c\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.438738 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle\") pod \"cb648cac-9420-4f49-bacd-96220d1d201c\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.438795 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8n7r\" (UniqueName: \"kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r\") pod \"cb648cac-9420-4f49-bacd-96220d1d201c\" (UID: \"cb648cac-9420-4f49-bacd-96220d1d201c\") " Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.439212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle" (OuterVolumeSpecName: "bundle") pod "cb648cac-9420-4f49-bacd-96220d1d201c" (UID: "cb648cac-9420-4f49-bacd-96220d1d201c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.452327 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r" (OuterVolumeSpecName: "kube-api-access-g8n7r") pod "cb648cac-9420-4f49-bacd-96220d1d201c" (UID: "cb648cac-9420-4f49-bacd-96220d1d201c"). InnerVolumeSpecName "kube-api-access-g8n7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.457545 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util" (OuterVolumeSpecName: "util") pod "cb648cac-9420-4f49-bacd-96220d1d201c" (UID: "cb648cac-9420-4f49-bacd-96220d1d201c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.539546 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.539599 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8n7r\" (UniqueName: \"kubernetes.io/projected/cb648cac-9420-4f49-bacd-96220d1d201c-kube-api-access-g8n7r\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.539616 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb648cac-9420-4f49-bacd-96220d1d201c-util\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.841316 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" event={"ID":"cb648cac-9420-4f49-bacd-96220d1d201c","Type":"ContainerDied","Data":"c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d"} Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.841359 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8db608606717521f7af670754c6ea1600b17de1f5eabc44dbc1c1ba915ec15d" Jan 27 00:16:25 crc kubenswrapper[4786]: I0127 00:16:25.841379 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.086271 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54"] Jan 27 00:16:28 crc kubenswrapper[4786]: E0127 00:16:28.088110 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="util" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.088214 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="util" Jan 27 00:16:28 crc kubenswrapper[4786]: E0127 00:16:28.088282 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="pull" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.088345 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="pull" Jan 27 00:16:28 crc kubenswrapper[4786]: E0127 00:16:28.088462 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="extract" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.088533 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="extract" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.088723 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb648cac-9420-4f49-bacd-96220d1d201c" containerName="extract" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.089307 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.091601 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.091676 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qbht8" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.091603 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.100190 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.221436 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.222231 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.224162 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fmpwp" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.226279 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.236403 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.240974 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.242022 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.264818 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.277301 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbz7\" (UniqueName: \"kubernetes.io/projected/e7226096-745e-4703-ace5-c936b6253c6b-kube-api-access-5bbz7\") pod \"obo-prometheus-operator-68bc856cb9-zbc54\" (UID: \"e7226096-745e-4703-ace5-c936b6253c6b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.378473 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.378825 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.378941 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbz7\" (UniqueName: \"kubernetes.io/projected/e7226096-745e-4703-ace5-c936b6253c6b-kube-api-access-5bbz7\") pod \"obo-prometheus-operator-68bc856cb9-zbc54\" (UID: \"e7226096-745e-4703-ace5-c936b6253c6b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.379027 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.379120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.410185 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-99tbv"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.410965 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.413914 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.414127 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-v7hdh" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.417207 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbz7\" (UniqueName: \"kubernetes.io/projected/e7226096-745e-4703-ace5-c936b6253c6b-kube-api-access-5bbz7\") pod \"obo-prometheus-operator-68bc856cb9-zbc54\" (UID: \"e7226096-745e-4703-ace5-c936b6253c6b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.430951 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-99tbv"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.480143 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.480202 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.480230 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.480254 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.483235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.484066 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.488228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a496e93b-aa8f-4501-ae30-38c707fb2367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77\" (UID: \"a496e93b-aa8f-4501-ae30-38c707fb2367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.488429 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fd33933-0292-4388-893a-e091b8f50b5e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx\" (UID: \"9fd33933-0292-4388-893a-e091b8f50b5e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.521793 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k92dn"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.522408 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.528020 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-79vch" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.534313 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k92dn"] Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.544229 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.564025 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.582220 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.582281 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkphd\" (UniqueName: \"kubernetes.io/projected/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-kube-api-access-hkphd\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.688491 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxv9j\" (UniqueName: \"kubernetes.io/projected/a1319c4e-44c7-4d01-a0b9-a79f7290e182-kube-api-access-gxv9j\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.688836 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.688881 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkphd\" (UniqueName: \"kubernetes.io/projected/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-kube-api-access-hkphd\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.688904 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1319c4e-44c7-4d01-a0b9-a79f7290e182-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.696323 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.710766 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.726394 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkphd\" (UniqueName: \"kubernetes.io/projected/c0ef0efb-5797-4b7f-a69c-8177fe9a148b-kube-api-access-hkphd\") pod \"observability-operator-59bdc8b94-99tbv\" (UID: \"c0ef0efb-5797-4b7f-a69c-8177fe9a148b\") " pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.743899 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.790727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1319c4e-44c7-4d01-a0b9-a79f7290e182-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.790813 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxv9j\" (UniqueName: \"kubernetes.io/projected/a1319c4e-44c7-4d01-a0b9-a79f7290e182-kube-api-access-gxv9j\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.792270 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1319c4e-44c7-4d01-a0b9-a79f7290e182-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.825758 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxv9j\" (UniqueName: \"kubernetes.io/projected/a1319c4e-44c7-4d01-a0b9-a79f7290e182-kube-api-access-gxv9j\") pod \"perses-operator-5bf474d74f-k92dn\" (UID: \"a1319c4e-44c7-4d01-a0b9-a79f7290e182\") " pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.843980 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.870435 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77"] Jan 27 00:16:28 crc kubenswrapper[4786]: W0127 00:16:28.870915 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda496e93b_aa8f_4501_ae30_38c707fb2367.slice/crio-4fd1e435c8418f4e7770868203153bdabd6de2c630b39ed210ab52ad88cb7389 WatchSource:0}: Error finding container 4fd1e435c8418f4e7770868203153bdabd6de2c630b39ed210ab52ad88cb7389: Status 404 returned error can't find the container with id 4fd1e435c8418f4e7770868203153bdabd6de2c630b39ed210ab52ad88cb7389 Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.874454 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerStarted","Data":"cf247481a1504cd22b38bc870faaba447f7c673ff35647cc35239eadca3d8030"} Jan 27 00:16:28 crc kubenswrapper[4786]: I0127 00:16:28.911025 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx"] Jan 27 00:16:28 crc kubenswrapper[4786]: W0127 00:16:28.928276 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fd33933_0292_4388_893a_e091b8f50b5e.slice/crio-5017901f0314a88794256df26189f120a51bbf5b750815757c6d9219e6dc9a3a WatchSource:0}: Error finding container 5017901f0314a88794256df26189f120a51bbf5b750815757c6d9219e6dc9a3a: Status 404 returned error can't find the container with id 5017901f0314a88794256df26189f120a51bbf5b750815757c6d9219e6dc9a3a Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.003823 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54"] Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.062642 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-99tbv"] Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.116631 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k92dn"] Jan 27 00:16:29 crc kubenswrapper[4786]: W0127 00:16:29.123771 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1319c4e_44c7_4d01_a0b9_a79f7290e182.slice/crio-b2346f3d44d3d763debf9d162f6a9c00eae4400baca13a600c4457837cc1d1e5 WatchSource:0}: Error finding container b2346f3d44d3d763debf9d162f6a9c00eae4400baca13a600c4457837cc1d1e5: Status 404 returned error can't find the container with id b2346f3d44d3d763debf9d162f6a9c00eae4400baca13a600c4457837cc1d1e5 Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.880298 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" event={"ID":"9fd33933-0292-4388-893a-e091b8f50b5e","Type":"ContainerStarted","Data":"5017901f0314a88794256df26189f120a51bbf5b750815757c6d9219e6dc9a3a"} Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.881460 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-99tbv" event={"ID":"c0ef0efb-5797-4b7f-a69c-8177fe9a148b","Type":"ContainerStarted","Data":"1b833779e394644bdafadc290bf4cfde01425e8ebf7d01e9472303a472912c9e"} Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.882762 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" event={"ID":"a1319c4e-44c7-4d01-a0b9-a79f7290e182","Type":"ContainerStarted","Data":"b2346f3d44d3d763debf9d162f6a9c00eae4400baca13a600c4457837cc1d1e5"} Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.883901 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" event={"ID":"a496e93b-aa8f-4501-ae30-38c707fb2367","Type":"ContainerStarted","Data":"4fd1e435c8418f4e7770868203153bdabd6de2c630b39ed210ab52ad88cb7389"} Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.885392 4786 generic.go:334] "Generic (PLEG): container finished" podID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerID="cf247481a1504cd22b38bc870faaba447f7c673ff35647cc35239eadca3d8030" exitCode=0 Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.885453 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerDied","Data":"cf247481a1504cd22b38bc870faaba447f7c673ff35647cc35239eadca3d8030"} Jan 27 00:16:29 crc kubenswrapper[4786]: I0127 00:16:29.886311 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" event={"ID":"e7226096-745e-4703-ace5-c936b6253c6b","Type":"ContainerStarted","Data":"3f03818a4b7c43c1aa5bf38227fc5ac1339ad6d579ea333942599e9fe572cbbb"} Jan 27 00:16:30 crc kubenswrapper[4786]: I0127 00:16:30.896102 4786 generic.go:334] "Generic (PLEG): container finished" podID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerID="3976ac0fc4df231792bd581c3cb8977901cbc739bdc18e63f1fd8e3e86125b8d" exitCode=0 Jan 27 00:16:30 crc kubenswrapper[4786]: I0127 00:16:30.896210 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerDied","Data":"3976ac0fc4df231792bd581c3cb8977901cbc739bdc18e63f1fd8e3e86125b8d"} Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.664742 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.759893 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8rqr\" (UniqueName: \"kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr\") pod \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.759961 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util\") pod \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.760033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle\") pod \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\" (UID: \"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be\") " Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.765307 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr" (OuterVolumeSpecName: "kube-api-access-t8rqr") pod "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" (UID: "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be"). InnerVolumeSpecName "kube-api-access-t8rqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.766762 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle" (OuterVolumeSpecName: "bundle") pod "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" (UID: "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.772020 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util" (OuterVolumeSpecName: "util") pod "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" (UID: "ee3c26cf-dfd2-47d5-85e9-9ec9e33478be"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.861341 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.861385 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8rqr\" (UniqueName: \"kubernetes.io/projected/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-kube-api-access-t8rqr\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.861398 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3c26cf-dfd2-47d5-85e9-9ec9e33478be-util\") on node \"crc\" DevicePath \"\"" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.917832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" event={"ID":"ee3c26cf-dfd2-47d5-85e9-9ec9e33478be","Type":"ContainerDied","Data":"ff5fbcb2e82b856e4e75c1d5196f889ad5eada5f211b1ecd564af4dcede404a7"} Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.917868 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5fbcb2e82b856e4e75c1d5196f889ad5eada5f211b1ecd564af4dcede404a7" Jan 27 00:16:33 crc kubenswrapper[4786]: I0127 00:16:33.917923 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.382625 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-dk6ns"] Jan 27 00:16:34 crc kubenswrapper[4786]: E0127 00:16:34.382850 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="pull" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.382868 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="pull" Jan 27 00:16:34 crc kubenswrapper[4786]: E0127 00:16:34.382891 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="util" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.382899 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="util" Jan 27 00:16:34 crc kubenswrapper[4786]: E0127 00:16:34.382912 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="extract" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.382921 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="extract" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.383033 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3c26cf-dfd2-47d5-85e9-9ec9e33478be" containerName="extract" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.383397 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.386695 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.387588 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.421004 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-5mf4f" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.437204 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-dk6ns"] Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.469737 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr6cm\" (UniqueName: \"kubernetes.io/projected/e2d6f91c-7bef-480c-93dc-8323e08d3e8b-kube-api-access-jr6cm\") pod \"interconnect-operator-5bb49f789d-dk6ns\" (UID: \"e2d6f91c-7bef-480c-93dc-8323e08d3e8b\") " pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.570460 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr6cm\" (UniqueName: \"kubernetes.io/projected/e2d6f91c-7bef-480c-93dc-8323e08d3e8b-kube-api-access-jr6cm\") pod \"interconnect-operator-5bb49f789d-dk6ns\" (UID: \"e2d6f91c-7bef-480c-93dc-8323e08d3e8b\") " pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.587924 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr6cm\" (UniqueName: \"kubernetes.io/projected/e2d6f91c-7bef-480c-93dc-8323e08d3e8b-kube-api-access-jr6cm\") pod \"interconnect-operator-5bb49f789d-dk6ns\" (UID: \"e2d6f91c-7bef-480c-93dc-8323e08d3e8b\") " pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" Jan 27 00:16:34 crc kubenswrapper[4786]: I0127 00:16:34.725849 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" Jan 27 00:16:37 crc kubenswrapper[4786]: I0127 00:16:37.877557 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-86c976bb94-542fx"] Jan 27 00:16:37 crc kubenswrapper[4786]: I0127 00:16:37.878814 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:37 crc kubenswrapper[4786]: I0127 00:16:37.882504 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 27 00:16:37 crc kubenswrapper[4786]: I0127 00:16:37.887300 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-rz9sn" Jan 27 00:16:37 crc kubenswrapper[4786]: I0127 00:16:37.898496 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-86c976bb94-542fx"] Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.014334 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-apiservice-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.014405 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-webhook-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.014437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlfl\" (UniqueName: \"kubernetes.io/projected/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-kube-api-access-gzlfl\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.115408 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-apiservice-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.115479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-webhook-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.115507 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzlfl\" (UniqueName: \"kubernetes.io/projected/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-kube-api-access-gzlfl\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.130696 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-apiservice-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 
crc kubenswrapper[4786]: I0127 00:16:38.130696 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-webhook-cert\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.135063 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzlfl\" (UniqueName: \"kubernetes.io/projected/fcaae56a-55fc-4679-b4bf-4d6e0d4050f7-kube-api-access-gzlfl\") pod \"elastic-operator-86c976bb94-542fx\" (UID: \"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7\") " pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:38 crc kubenswrapper[4786]: I0127 00:16:38.202000 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-86c976bb94-542fx" Jan 27 00:16:39 crc kubenswrapper[4786]: I0127 00:16:39.228414 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-86c976bb94-542fx"] Jan 27 00:16:39 crc kubenswrapper[4786]: I0127 00:16:39.512284 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-dk6ns"] Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.023213 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" event={"ID":"e2d6f91c-7bef-480c-93dc-8323e08d3e8b","Type":"ContainerStarted","Data":"6803926c7d5c8a3458424d2d8b7fd85d4a7ae1213ec4a5c1ff68252dc7189ece"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.024771 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" event={"ID":"9fd33933-0292-4388-893a-e091b8f50b5e","Type":"ContainerStarted","Data":"1a9f11fc0936adc2f6cbd585211f6e905b2ae4b21461baf65bd7a8328fc3c907"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.027295 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-86c976bb94-542fx" event={"ID":"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7","Type":"ContainerStarted","Data":"56cd3d3f658ac86083b8763b9240120316b76f97d3348ef3632752aeae71ea34"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.029547 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" event={"ID":"a1319c4e-44c7-4d01-a0b9-a79f7290e182","Type":"ContainerStarted","Data":"25ffb8ec4645985f611674259afdde61bf1112f63be6f899aa667b98e515436d"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.029709 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.031087 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" event={"ID":"c0ef0efb-5797-4b7f-a69c-8177fe9a148b","Type":"ContainerStarted","Data":"2b53dd99f34aa7e981b4a51b217beb3955b2a12f94bf813a9caa1991b8d295a6"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.031276 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.033237 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" event={"ID":"a496e93b-aa8f-4501-ae30-38c707fb2367","Type":"ContainerStarted","Data":"7812fe9e1f52bcfd263025c03137ad1f966eeadca5a294d052e8aae36488a585"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.033919 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.035425 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" event={"ID":"e7226096-745e-4703-ace5-c936b6253c6b","Type":"ContainerStarted","Data":"e9fd83f4084e7186610623a79479bb516100d3975a7df5bbe6e948f33302fef2"} Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.043003 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx" podStartSLOduration=2.091208003 podStartE2EDuration="12.042986839s" podCreationTimestamp="2026-01-27 00:16:28 +0000 UTC" firstStartedPulling="2026-01-27 00:16:28.932647823 +0000 UTC m=+634.416334866" lastFinishedPulling="2026-01-27 00:16:38.884426659 +0000 UTC m=+644.368113702" observedRunningTime="2026-01-27 00:16:40.042927348 +0000 UTC m=+645.526614391" watchObservedRunningTime="2026-01-27 00:16:40.042986839 +0000 UTC m=+645.526673882" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.068449 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" podStartSLOduration=2.307221844 podStartE2EDuration="12.068432057s" podCreationTimestamp="2026-01-27 00:16:28 +0000 UTC" firstStartedPulling="2026-01-27 00:16:29.126406102 +0000 UTC m=+634.610093145" lastFinishedPulling="2026-01-27 00:16:38.887616315 +0000 UTC m=+644.371303358" observedRunningTime="2026-01-27 00:16:40.063621512 +0000 UTC m=+645.547308575" watchObservedRunningTime="2026-01-27 00:16:40.068432057 +0000 UTC m=+645.552119110" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.104239 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77" podStartSLOduration=2.099650028 podStartE2EDuration="12.104219908s" podCreationTimestamp="2026-01-27 00:16:28 +0000 UTC" firstStartedPulling="2026-01-27 00:16:28.898231864 +0000 UTC m=+634.381918907" lastFinishedPulling="2026-01-27 00:16:38.902801744 +0000 UTC m=+644.386488787" observedRunningTime="2026-01-27 00:16:40.09999493 +0000 UTC m=+645.583681973" watchObservedRunningTime="2026-01-27 00:16:40.104219908 +0000 UTC m=+645.587906961" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.143767 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-99tbv" podStartSLOduration=2.309958187 podStartE2EDuration="12.143742251s" podCreationTimestamp="2026-01-27 00:16:28 +0000 UTC" firstStartedPulling="2026-01-27 00:16:29.079493086 +0000 UTC m=+634.563180129" lastFinishedPulling="2026-01-27 00:16:38.91327715 +0000 UTC m=+644.396964193" observedRunningTime="2026-01-27 00:16:40.141971037 +0000 UTC m=+645.625658080" watchObservedRunningTime="2026-01-27 00:16:40.143742251 +0000 UTC m=+645.627429294" Jan 27 00:16:40 crc kubenswrapper[4786]: I0127 00:16:40.173711 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zbc54" podStartSLOduration=2.325233128 podStartE2EDuration="12.173689765s" podCreationTimestamp="2026-01-27 00:16:28 +0000 UTC" firstStartedPulling="2026-01-27 00:16:29.039171999 +0000 UTC m=+634.522859042" lastFinishedPulling="2026-01-27 00:16:38.887628646 +0000 UTC m=+644.371315679" observedRunningTime="2026-01-27 00:16:40.166800767 +0000 UTC m=+645.650487820" watchObservedRunningTime="2026-01-27 00:16:40.173689765 +0000 UTC m=+645.657376808" Jan 27 00:16:43 crc kubenswrapper[4786]: I0127 00:16:43.055594 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-86c976bb94-542fx" event={"ID":"fcaae56a-55fc-4679-b4bf-4d6e0d4050f7","Type":"ContainerStarted","Data":"46f1fb6cbd7edbc0aca11728c090eee1ed2bdfeb9e50cc9b5d84ec72dc627842"} Jan 27 00:16:43 crc kubenswrapper[4786]: I0127 00:16:43.076495 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-86c976bb94-542fx" podStartSLOduration=3.000380944 podStartE2EDuration="6.076479042s" podCreationTimestamp="2026-01-27 00:16:37 +0000 UTC" firstStartedPulling="2026-01-27 00:16:39.243360413 +0000 UTC m=+644.727047456" lastFinishedPulling="2026-01-27 00:16:42.319458511 +0000 UTC m=+647.803145554" observedRunningTime="2026-01-27 00:16:43.070103068 +0000 UTC m=+648.553790121" watchObservedRunningTime="2026-01-27 00:16:43.076479042 +0000 UTC m=+648.560166085" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.738655 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.739848 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.741215 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.741499 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.741661 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.742974 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.743149 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.743276 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.743460 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-8tl8r" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.743686 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.743826 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.755426 
4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.864453 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9e18f628-3329-4df0-a091-50b327ec89cd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.864560 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.864775 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865060 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865213 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865257 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865288 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865346 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865431 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865466 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865555 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.865731 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967039 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967099 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967139 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967166 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967806 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968129 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967738 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967948 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 
00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.967637 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968642 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968649 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968778 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9e18f628-3329-4df0-a091-50b327ec89cd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968951 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.968981 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.969061 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.969090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.969121 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.969896 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.970619 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.975617 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9e18f628-3329-4df0-a091-50b327ec89cd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.976054 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.978109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.979308 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.986297 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.986335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.986335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:47 crc kubenswrapper[4786]: I0127 00:16:47.987133 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9e18f628-3329-4df0-a091-50b327ec89cd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9e18f628-3329-4df0-a091-50b327ec89cd\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:48 crc kubenswrapper[4786]: I0127 00:16:48.057831 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:16:48 crc kubenswrapper[4786]: I0127 00:16:48.087874 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" event={"ID":"e2d6f91c-7bef-480c-93dc-8323e08d3e8b","Type":"ContainerStarted","Data":"c71dae9ebb925db059817d85d5da54d7dba8969c90638ab7b70d3f7484bc4346"} Jan 27 00:16:48 crc kubenswrapper[4786]: I0127 00:16:48.114536 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-dk6ns" podStartSLOduration=6.192805779 podStartE2EDuration="14.114516652s" podCreationTimestamp="2026-01-27 00:16:34 +0000 UTC" firstStartedPulling="2026-01-27 00:16:39.528372706 +0000 UTC m=+645.012059749" lastFinishedPulling="2026-01-27 00:16:47.450083579 +0000 UTC m=+652.933770622" observedRunningTime="2026-01-27 00:16:48.112063638 +0000 UTC m=+653.595750681" watchObservedRunningTime="2026-01-27 00:16:48.114516652 +0000 UTC m=+653.598203695" Jan 27 00:16:48 crc kubenswrapper[4786]: I0127 00:16:48.257149 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 27 00:16:48 crc kubenswrapper[4786]: W0127 00:16:48.267036 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e18f628_3329_4df0_a091_50b327ec89cd.slice/crio-4170e0b2dce771da9daca42370b1341cb7bf01931f3c58113e676114a57e6100 WatchSource:0}: Error finding container 4170e0b2dce771da9daca42370b1341cb7bf01931f3c58113e676114a57e6100: Status 404 returned error can't find the container with id 4170e0b2dce771da9daca42370b1341cb7bf01931f3c58113e676114a57e6100 Jan 27 00:16:48 crc kubenswrapper[4786]: I0127 00:16:48.856379 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-k92dn" Jan 27 00:16:49 crc kubenswrapper[4786]: I0127 00:16:49.107304 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9e18f628-3329-4df0-a091-50b327ec89cd","Type":"ContainerStarted","Data":"4170e0b2dce771da9daca42370b1341cb7bf01931f3c58113e676114a57e6100"} Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.472106 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.473496 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.475117 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.475497 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2hvtv" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.475623 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.476204 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.483319 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605112 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605154 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605183 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605202 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605219 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26g7\" (UniqueName: \"kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605263 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605278 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605362 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605394 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605414 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.605430 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.706939 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707170 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707469 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26g7\" (UniqueName: \"kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.708073 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.708525 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707773 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707897 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707625 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.708215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.708467 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.707771 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.708867 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.709630 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.709815 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.710746 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.710684 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.711204 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.711721 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.717152 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.723022 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26g7\" (UniqueName: \"kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.736213 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:57 crc kubenswrapper[4786]: I0127 00:16:57.791103 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.806697 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99"] Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.808404 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.811299 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.811866 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6dhp4" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.812038 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.811882 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99"] Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.941494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2acca51-1719-4a98-9943-1b8de5bff12b-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:16:59 crc kubenswrapper[4786]: I0127 00:16:59.941545 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b588\" (UniqueName: \"kubernetes.io/projected/c2acca51-1719-4a98-9943-1b8de5bff12b-kube-api-access-8b588\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:00 crc kubenswrapper[4786]: I0127 00:17:00.042967 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b588\" (UniqueName: \"kubernetes.io/projected/c2acca51-1719-4a98-9943-1b8de5bff12b-kube-api-access-8b588\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:00 crc kubenswrapper[4786]: I0127 00:17:00.043076 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2acca51-1719-4a98-9943-1b8de5bff12b-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:00 crc kubenswrapper[4786]: I0127 00:17:00.043479 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2acca51-1719-4a98-9943-1b8de5bff12b-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:00 crc kubenswrapper[4786]: I0127 00:17:00.060739 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b588\" (UniqueName: \"kubernetes.io/projected/c2acca51-1719-4a98-9943-1b8de5bff12b-kube-api-access-8b588\") pod \"cert-manager-operator-controller-manager-5446d6888b-hqh99\" (UID: \"c2acca51-1719-4a98-9943-1b8de5bff12b\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:00 crc kubenswrapper[4786]: I0127 00:17:00.147478 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" Jan 27 00:17:07 crc kubenswrapper[4786]: I0127 00:17:07.800694 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:17:07 crc kubenswrapper[4786]: I0127 00:17:07.859627 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:17:07 crc kubenswrapper[4786]: E0127 00:17:07.948581 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Jan 27 00:17:07 crc kubenswrapper[4786]: E0127 00:17:07.948862 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(9e18f628-3329-4df0-a091-50b327ec89cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Jan 27 00:17:07 crc kubenswrapper[4786]: E0127 00:17:07.951009 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9e18f628-3329-4df0-a091-50b327ec89cd" Jan 27 00:17:08 crc kubenswrapper[4786]: I0127 00:17:08.058672 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99"] Jan 27 00:17:08 crc kubenswrapper[4786]: W0127 00:17:08.064275 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2acca51_1719_4a98_9943_1b8de5bff12b.slice/crio-80925d6fd270b19d64abfcfa23a4cb5a3b8edc3f302e63e06c57bf004ee230a8 WatchSource:0}: Error finding container 80925d6fd270b19d64abfcfa23a4cb5a3b8edc3f302e63e06c57bf004ee230a8: Status 404 returned error can't find the container with id 80925d6fd270b19d64abfcfa23a4cb5a3b8edc3f302e63e06c57bf004ee230a8 Jan 27 00:17:08 crc kubenswrapper[4786]: I0127 00:17:08.217682 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" event={"ID":"c2acca51-1719-4a98-9943-1b8de5bff12b","Type":"ContainerStarted","Data":"80925d6fd270b19d64abfcfa23a4cb5a3b8edc3f302e63e06c57bf004ee230a8"} Jan 27 00:17:08 crc kubenswrapper[4786]: I0127 00:17:08.223082 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1bcc4e39-698a-4382-a9ca-2418bf18ff57","Type":"ContainerStarted","Data":"102f63b70656d322dc35dab09fcb60f4d4198e194bc17f75e2e1545bc28a81ab"} Jan 27 00:17:08 crc kubenswrapper[4786]: E0127 00:17:08.224928 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9e18f628-3329-4df0-a091-50b327ec89cd" Jan 27 00:17:08 crc kubenswrapper[4786]: I0127 00:17:08.414622 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 27 00:17:08 crc kubenswrapper[4786]: I0127 00:17:08.444777 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 27 00:17:09 crc kubenswrapper[4786]: E0127 00:17:09.230287 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9e18f628-3329-4df0-a091-50b327ec89cd" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.445934 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.447007 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.448876 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.448940 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.450974 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.468418 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.580910 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.580950 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.580974 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.580991 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581039 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581107 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581144 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581177 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581203 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581232 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.581248 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5w2k\" (UniqueName: \"kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682086 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5w2k\" (UniqueName: \"kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682115 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682156 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682172 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682189 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682278 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682319 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682337 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: 
\"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682467 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.682509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.683353 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.683417 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.683446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.683985 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.684226 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.684269 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.684349 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.688009 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.688103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.699196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5w2k\" (UniqueName: \"kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k\") pod \"service-telemetry-operator-2-build\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:09 crc kubenswrapper[4786]: I0127 00:17:09.765109 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:10 crc kubenswrapper[4786]: I0127 00:17:10.225347 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:10 crc kubenswrapper[4786]: E0127 00:17:10.236066 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9e18f628-3329-4df0-a091-50b327ec89cd" Jan 27 00:17:11 crc kubenswrapper[4786]: I0127 00:17:11.249038 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc0de960-7291-4334-8bef-fa589aae1f97","Type":"ContainerStarted","Data":"7bbbd22549f41a3f817142453d316f90134e8b6d824ccac7ed27de008349898a"} Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.298750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" event={"ID":"c2acca51-1719-4a98-9943-1b8de5bff12b","Type":"ContainerStarted","Data":"270c83d3dbe8ad520052f632e7a749ed73d978e84d80a17a8ff28220147861e2"} Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.301121 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc0de960-7291-4334-8bef-fa589aae1f97","Type":"ContainerStarted","Data":"1528d7325d938ec52657a155f3502e6a20605510db4f826623aa483abff790a7"} Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.303316 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1bcc4e39-698a-4382-a9ca-2418bf18ff57","Type":"ContainerStarted","Data":"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76"} Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.303490 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" containerName="manage-dockerfile" containerID="cri-o://c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76" gracePeriod=30 Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.340131 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hqh99" podStartSLOduration=10.042935106 podStartE2EDuration="18.340095793s" podCreationTimestamp="2026-01-27 00:16:59 +0000 UTC" firstStartedPulling="2026-01-27 00:17:08.067669353 +0000 UTC m=+673.551356386" lastFinishedPulling="2026-01-27 00:17:16.36483003 +0000 UTC m=+681.848517073" observedRunningTime="2026-01-27 00:17:17.330256974 +0000 UTC m=+682.813944037" watchObservedRunningTime="2026-01-27 00:17:17.340095793 +0000 UTC m=+682.823782856" Jan 27 00:17:17 crc kubenswrapper[4786]: E0127 00:17:17.439237 4786 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=7590965155756438792, SKID=, AKID=F0:0D:6D:88:24:7F:3D:CB:D1:6B:77:1C:09:0B:FF:9E:69:20:36:22 failed: x509: certificate signed by unknown authority" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.665907 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_1bcc4e39-698a-4382-a9ca-2418bf18ff57/manage-dockerfile/0.log" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.665998 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.737904 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.737948 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.737972 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.737993 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738036 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738093 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738143 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738150 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738184 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738201 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26g7\" (UniqueName: \"kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738281 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738581 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738699 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738713 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738726 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root\") pod \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\" (UID: \"1bcc4e39-698a-4382-a9ca-2418bf18ff57\") " Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738865 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738896 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738917 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.738980 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739141 4786 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739154 4786 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739168 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739189 4786 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739198 4786 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739206 4786 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1bcc4e39-698a-4382-a9ca-2418bf18ff57-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739214 4786 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.739222 4786 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bcc4e39-698a-4382-a9ca-2418bf18ff57-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.740304 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.743655 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-pull") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "builder-dockercfg-2hvtv-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.743679 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7" (OuterVolumeSpecName: "kube-api-access-m26g7") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). 
InnerVolumeSpecName "kube-api-access-m26g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.747920 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-push") pod "1bcc4e39-698a-4382-a9ca-2418bf18ff57" (UID: "1bcc4e39-698a-4382-a9ca-2418bf18ff57"). InnerVolumeSpecName "builder-dockercfg-2hvtv-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.840685 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26g7\" (UniqueName: \"kubernetes.io/projected/1bcc4e39-698a-4382-a9ca-2418bf18ff57-kube-api-access-m26g7\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.841255 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-push\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.841314 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/1bcc4e39-698a-4382-a9ca-2418bf18ff57-builder-dockercfg-2hvtv-pull\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:17 crc kubenswrapper[4786]: I0127 00:17:17.841373 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1bcc4e39-698a-4382-a9ca-2418bf18ff57-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.309860 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_1bcc4e39-698a-4382-a9ca-2418bf18ff57/manage-dockerfile/0.log" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.309900 4786 generic.go:334] "Generic (PLEG): container finished" podID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" containerID="c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76" exitCode=1 Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.309960 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.309996 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1bcc4e39-698a-4382-a9ca-2418bf18ff57","Type":"ContainerDied","Data":"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76"} Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.310025 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1bcc4e39-698a-4382-a9ca-2418bf18ff57","Type":"ContainerDied","Data":"102f63b70656d322dc35dab09fcb60f4d4198e194bc17f75e2e1545bc28a81ab"} Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.310042 4786 scope.go:117] "RemoveContainer" containerID="c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.327734 4786 scope.go:117] "RemoveContainer" containerID="c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76" Jan 27 00:17:18 crc kubenswrapper[4786]: E0127 00:17:18.328109 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76\": container with ID starting with c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76 not found: ID does not exist" containerID="c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.328135 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76"} err="failed to get container status \"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76\": rpc error: code = NotFound desc = could not find container \"c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76\": container with ID starting with c94302422a23bf78cdee01bd68942ff2737c44e6bb65186804e04d686e2f4f76 not found: ID does not exist" Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.362149 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.364829 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 27 00:17:18 crc kubenswrapper[4786]: I0127 00:17:18.483006 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:19 crc kubenswrapper[4786]: I0127 00:17:19.154994 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" path="/var/lib/kubelet/pods/1bcc4e39-698a-4382-a9ca-2418bf18ff57/volumes" Jan 27 00:17:19 crc kubenswrapper[4786]: I0127 00:17:19.316703 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="dc0de960-7291-4334-8bef-fa589aae1f97" containerName="git-clone" containerID="cri-o://1528d7325d938ec52657a155f3502e6a20605510db4f826623aa483abff790a7" gracePeriod=30 Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.323678 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_dc0de960-7291-4334-8bef-fa589aae1f97/git-clone/0.log" Jan 27 
00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.323976 4786 generic.go:334] "Generic (PLEG): container finished" podID="dc0de960-7291-4334-8bef-fa589aae1f97" containerID="1528d7325d938ec52657a155f3502e6a20605510db4f826623aa483abff790a7" exitCode=1 Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.324006 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc0de960-7291-4334-8bef-fa589aae1f97","Type":"ContainerDied","Data":"1528d7325d938ec52657a155f3502e6a20605510db4f826623aa483abff790a7"} Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.345084 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.345158 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.383240 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_dc0de960-7291-4334-8bef-fa589aae1f97/git-clone/0.log" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.383340 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492379 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492433 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492454 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492495 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5w2k\" (UniqueName: \"kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492533 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: 
\"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492595 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492628 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492649 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492668 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492705 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492722 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492813 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull\") pod \"dc0de960-7291-4334-8bef-fa589aae1f97\" (UID: \"dc0de960-7291-4334-8bef-fa589aae1f97\") " Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.492896 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493056 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493096 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493395 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493407 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493420 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493191 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493455 4786 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493438 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.493527 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.498860 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-push") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "builder-dockercfg-2hvtv-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.498902 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k" (OuterVolumeSpecName: "kube-api-access-f5w2k") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "kube-api-access-f5w2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.498900 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-pull") pod "dc0de960-7291-4334-8bef-fa589aae1f97" (UID: "dc0de960-7291-4334-8bef-fa589aae1f97"). InnerVolumeSpecName "builder-dockercfg-2hvtv-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594284 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-push\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594346 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594360 4786 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594369 4786 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594380 4786 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc0de960-7291-4334-8bef-fa589aae1f97-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594391 4786 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc0de960-7291-4334-8bef-fa589aae1f97-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594401 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/dc0de960-7291-4334-8bef-fa589aae1f97-builder-dockercfg-2hvtv-pull\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594411 4786 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594421 4786 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc0de960-7291-4334-8bef-fa589aae1f97-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:20 crc kubenswrapper[4786]: I0127 00:17:20.594430 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5w2k\" (UniqueName: \"kubernetes.io/projected/dc0de960-7291-4334-8bef-fa589aae1f97-kube-api-access-f5w2k\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.331044 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_dc0de960-7291-4334-8bef-fa589aae1f97/git-clone/0.log" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.331096 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc0de960-7291-4334-8bef-fa589aae1f97","Type":"ContainerDied","Data":"7bbbd22549f41a3f817142453d316f90134e8b6d824ccac7ed27de008349898a"} Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.331132 4786 scope.go:117] "RemoveContainer" 
containerID="1528d7325d938ec52657a155f3502e6a20605510db4f826623aa483abff790a7" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.331187 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.353362 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.358621 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.694889 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zhmht"] Jan 27 00:17:21 crc kubenswrapper[4786]: E0127 00:17:21.695258 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" containerName="manage-dockerfile" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.695290 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" containerName="manage-dockerfile" Jan 27 00:17:21 crc kubenswrapper[4786]: E0127 00:17:21.695317 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0de960-7291-4334-8bef-fa589aae1f97" containerName="git-clone" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.695331 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0de960-7291-4334-8bef-fa589aae1f97" containerName="git-clone" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.695517 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcc4e39-698a-4382-a9ca-2418bf18ff57" containerName="manage-dockerfile" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.695556 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0de960-7291-4334-8bef-fa589aae1f97" containerName="git-clone" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.697381 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.699399 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.699757 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ltz5w" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.700679 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.706472 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zhmht"] Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.807579 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.807805 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxzl\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-kube-api-access-fjxzl\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.909519 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxzl\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-kube-api-access-fjxzl\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.909626 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.925236 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxzl\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-kube-api-access-fjxzl\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:21 crc kubenswrapper[4786]: I0127 00:17:21.928120 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31f927e-7e82-4072-a4de-2e312a541c86-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zhmht\" (UID: \"f31f927e-7e82-4072-a4de-2e312a541c86\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:22 crc kubenswrapper[4786]: I0127 00:17:22.010958 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" Jan 27 00:17:22 crc kubenswrapper[4786]: I0127 00:17:22.217329 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zhmht"] Jan 27 00:17:22 crc kubenswrapper[4786]: W0127 00:17:22.225770 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31f927e_7e82_4072_a4de_2e312a541c86.slice/crio-d6b039dc1bb319fd5e53da089e5385a40d367fd75f06eeed2f9069f528c35171 WatchSource:0}: Error finding container d6b039dc1bb319fd5e53da089e5385a40d367fd75f06eeed2f9069f528c35171: Status 404 returned error can't find the container with id d6b039dc1bb319fd5e53da089e5385a40d367fd75f06eeed2f9069f528c35171 Jan 27 00:17:22 crc kubenswrapper[4786]: I0127 00:17:22.338140 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" event={"ID":"f31f927e-7e82-4072-a4de-2e312a541c86","Type":"ContainerStarted","Data":"d6b039dc1bb319fd5e53da089e5385a40d367fd75f06eeed2f9069f528c35171"} Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.186124 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0de960-7291-4334-8bef-fa589aae1f97" path="/var/lib/kubelet/pods/dc0de960-7291-4334-8bef-fa589aae1f97/volumes" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.282675 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdqcp"] Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.283480 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.288098 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8pvld" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.291986 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdqcp"] Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.326987 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.327101 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvh8\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-kube-api-access-6xvh8\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.427799 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvh8\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-kube-api-access-6xvh8\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.427897 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.448947 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.460430 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvh8\" (UniqueName: \"kubernetes.io/projected/5c20001c-cb4a-4cf6-8102-0273ce0f949a-kube-api-access-6xvh8\") pod \"cert-manager-webhook-f4fb5df64-hdqcp\" (UID: \"5c20001c-cb4a-4cf6-8102-0273ce0f949a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.609545 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:23 crc kubenswrapper[4786]: I0127 00:17:23.847761 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdqcp"] Jan 27 00:17:24 crc kubenswrapper[4786]: I0127 00:17:24.353167 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" event={"ID":"5c20001c-cb4a-4cf6-8102-0273ce0f949a","Type":"ContainerStarted","Data":"d99754d1942c2a8dde0167dde233eaf4554aea9293f7b88a994def72453e2f55"} Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.967646 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.974091 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.979162 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config" Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.979168 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca" Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.979274 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2hvtv" Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.979344 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca" Jan 27 00:17:29 crc kubenswrapper[4786]: I0127 00:17:29.989088 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019349 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbdg\" (UniqueName: \"kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019708 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019744 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019775 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019836 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull\") pod 
\"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019871 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019892 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019922 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019951 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.019981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.020005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121408 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121435 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121464 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121481 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121477 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121531 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121869 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121496 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbdg\" (UniqueName: \"kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.121995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122022 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122071 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122110 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122410 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122421 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122557 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122750 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.122884 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.128158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.128158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.139292 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbdg\" (UniqueName: \"kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg\") pod \"service-telemetry-operator-3-build\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.291052 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.395309 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" event={"ID":"5c20001c-cb4a-4cf6-8102-0273ce0f949a","Type":"ContainerStarted","Data":"a18c63a47ccb4c8ce249a04b64d240c51bc4e9fa75ee5e4e272df1dff9d6ff8e"} Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.395456 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.396722 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" event={"ID":"f31f927e-7e82-4072-a4de-2e312a541c86","Type":"ContainerStarted","Data":"e3d32f2ea373d5b2c165163618ac7b2ecc4241ab20bc4d8d606a1d4a6e80c5e1"} Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.398140 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9e18f628-3329-4df0-a091-50b327ec89cd","Type":"ContainerStarted","Data":"65df07d5f4faa2f74dec23943819100aee8a96f8e98d983f61c1a7aec0db7a72"} Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.430416 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" podStartSLOduration=1.501273765 podStartE2EDuration="7.430396835s" podCreationTimestamp="2026-01-27 00:17:23 +0000 UTC" firstStartedPulling="2026-01-27 00:17:23.863281487 +0000 UTC m=+689.346968530" lastFinishedPulling="2026-01-27 00:17:29.792404557 +0000 UTC m=+695.276091600" observedRunningTime="2026-01-27 00:17:30.424559578 +0000 UTC m=+695.908246621" watchObservedRunningTime="2026-01-27 00:17:30.430396835 +0000 UTC m=+695.914083878" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.480138 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zhmht" podStartSLOduration=1.860178546 podStartE2EDuration="9.480115119s" podCreationTimestamp="2026-01-27 00:17:21 +0000 UTC" firstStartedPulling="2026-01-27 00:17:22.227011444 +0000 UTC m=+687.710698477" lastFinishedPulling="2026-01-27 00:17:29.846948007 +0000 UTC m=+695.330635050" observedRunningTime="2026-01-27 00:17:30.472536228 +0000 UTC m=+695.956223271" watchObservedRunningTime="2026-01-27 00:17:30.480115119 +0000 UTC m=+695.963802162" Jan 27 00:17:30 crc kubenswrapper[4786]: I0127 00:17:30.541590 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:30 crc kubenswrapper[4786]: W0127 00:17:30.552785 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod970ee46d_8087_4a82_8eca_f0aa98f4c5b9.slice/crio-149dbf5d8b712e06f59765aa167eb571312243b0321879791810d5ff42c327e3 WatchSource:0}: Error finding container 149dbf5d8b712e06f59765aa167eb571312243b0321879791810d5ff42c327e3: Status 404 returned error can't find the container with id 149dbf5d8b712e06f59765aa167eb571312243b0321879791810d5ff42c327e3 Jan 27 00:17:31 crc kubenswrapper[4786]: I0127 00:17:31.404686 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" 
event={"ID":"970ee46d-8087-4a82-8eca-f0aa98f4c5b9","Type":"ContainerStarted","Data":"149dbf5d8b712e06f59765aa167eb571312243b0321879791810d5ff42c327e3"} Jan 27 00:17:31 crc kubenswrapper[4786]: I0127 00:17:31.407229 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e18f628-3329-4df0-a091-50b327ec89cd" containerID="65df07d5f4faa2f74dec23943819100aee8a96f8e98d983f61c1a7aec0db7a72" exitCode=0 Jan 27 00:17:31 crc kubenswrapper[4786]: I0127 00:17:31.407353 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9e18f628-3329-4df0-a091-50b327ec89cd","Type":"ContainerDied","Data":"65df07d5f4faa2f74dec23943819100aee8a96f8e98d983f61c1a7aec0db7a72"} Jan 27 00:17:33 crc kubenswrapper[4786]: E0127 00:17:33.591164 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e18f628_3329_4df0_a091_50b327ec89cd.slice/crio-99e58471e1d375b17d824e0c174bb82373474299bf3468a848053fa39edb5cd2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e18f628_3329_4df0_a091_50b327ec89cd.slice/crio-conmon-99e58471e1d375b17d824e0c174bb82373474299bf3468a848053fa39edb5cd2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 00:17:34 crc kubenswrapper[4786]: I0127 00:17:34.431337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"970ee46d-8087-4a82-8eca-f0aa98f4c5b9","Type":"ContainerStarted","Data":"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9"} Jan 27 00:17:34 crc kubenswrapper[4786]: I0127 00:17:34.432989 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e18f628-3329-4df0-a091-50b327ec89cd" containerID="99e58471e1d375b17d824e0c174bb82373474299bf3468a848053fa39edb5cd2" exitCode=0 Jan 27 00:17:34 crc kubenswrapper[4786]: I0127 00:17:34.433034 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9e18f628-3329-4df0-a091-50b327ec89cd","Type":"ContainerDied","Data":"99e58471e1d375b17d824e0c174bb82373474299bf3468a848053fa39edb5cd2"} Jan 27 00:17:34 crc kubenswrapper[4786]: E0127 00:17:34.509294 4786 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=7590965155756438792, SKID=, AKID=F0:0D:6D:88:24:7F:3D:CB:D1:6B:77:1C:09:0B:FF:9E:69:20:36:22 failed: x509: certificate signed by unknown authority" Jan 27 00:17:35 crc kubenswrapper[4786]: I0127 00:17:35.441152 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9e18f628-3329-4df0-a091-50b327ec89cd","Type":"ContainerStarted","Data":"c6aa9064d03488418ae4c75904c19de204a8492e2527f9b5cef6d5380360f223"} Jan 27 00:17:35 crc kubenswrapper[4786]: I0127 00:17:35.441836 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:17:35 crc kubenswrapper[4786]: I0127 00:17:35.483116 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.959020908 podStartE2EDuration="48.483098761s" podCreationTimestamp="2026-01-27 00:16:47 +0000 UTC" firstStartedPulling="2026-01-27 00:16:48.268897671 +0000 UTC m=+653.752584714" lastFinishedPulling="2026-01-27 00:17:29.792975524 +0000 UTC 
m=+695.276662567" observedRunningTime="2026-01-27 00:17:35.471469407 +0000 UTC m=+700.955156470" watchObservedRunningTime="2026-01-27 00:17:35.483098761 +0000 UTC m=+700.966785804" Jan 27 00:17:35 crc kubenswrapper[4786]: I0127 00:17:35.537633 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.450321 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" containerName="git-clone" containerID="cri-o://1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9" gracePeriod=30 Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.802474 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_970ee46d-8087-4a82-8eca-f0aa98f4c5b9/git-clone/0.log" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.802784 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912597 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912656 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912678 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912705 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912734 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912766 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912794 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912836 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912881 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912909 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912945 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lbdg\" (UniqueName: \"kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.912960 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir\") pod \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\" (UID: \"970ee46d-8087-4a82-8eca-f0aa98f4c5b9\") " Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913179 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913481 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913492 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913588 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913792 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.913986 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.914069 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.914166 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.914425 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.918535 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-push") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "builder-dockercfg-2hvtv-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.918952 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-pull") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "builder-dockercfg-2hvtv-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:36 crc kubenswrapper[4786]: I0127 00:17:36.919069 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg" (OuterVolumeSpecName: "kube-api-access-4lbdg") pod "970ee46d-8087-4a82-8eca-f0aa98f4c5b9" (UID: "970ee46d-8087-4a82-8eca-f0aa98f4c5b9"). InnerVolumeSpecName "kube-api-access-4lbdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014227 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014255 4786 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014264 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014273 4786 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014282 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-push\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014291 4786 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014299 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lbdg\" (UniqueName: \"kubernetes.io/projected/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-kube-api-access-4lbdg\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014307 4786 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014315 4786 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014322 
4786 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014330 4786 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.014340 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/970ee46d-8087-4a82-8eca-f0aa98f4c5b9-builder-dockercfg-2hvtv-pull\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.456787 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_970ee46d-8087-4a82-8eca-f0aa98f4c5b9/git-clone/0.log" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.456850 4786 generic.go:334] "Generic (PLEG): container finished" podID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" containerID="1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9" exitCode=1 Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.456890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"970ee46d-8087-4a82-8eca-f0aa98f4c5b9","Type":"ContainerDied","Data":"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9"} Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.456925 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"970ee46d-8087-4a82-8eca-f0aa98f4c5b9","Type":"ContainerDied","Data":"149dbf5d8b712e06f59765aa167eb571312243b0321879791810d5ff42c327e3"} Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.456948 4786 scope.go:117] "RemoveContainer" containerID="1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.457100 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.476639 4786 scope.go:117] "RemoveContainer" containerID="1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9" Jan 27 00:17:37 crc kubenswrapper[4786]: E0127 00:17:37.477160 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9\": container with ID starting with 1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9 not found: ID does not exist" containerID="1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.477206 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9"} err="failed to get container status \"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9\": rpc error: code = NotFound desc = could not find container \"1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9\": container with ID starting with 1e903a16d1c1f32435ee4e84286fb1ed95489921a677b1d30a4751e56c4608d9 not found: ID does not exist" Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.491742 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:37 crc kubenswrapper[4786]: I0127 00:17:37.495694 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 27 00:17:38 crc kubenswrapper[4786]: I0127 00:17:38.613675 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdqcp" Jan 27 00:17:39 crc kubenswrapper[4786]: I0127 00:17:39.158487 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" path="/var/lib/kubelet/pods/970ee46d-8087-4a82-8eca-f0aa98f4c5b9/volumes" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.628196 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vnqb7"] Jan 27 00:17:40 crc kubenswrapper[4786]: E0127 00:17:40.628504 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" containerName="git-clone" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.628521 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" containerName="git-clone" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.628697 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="970ee46d-8087-4a82-8eca-f0aa98f4c5b9" containerName="git-clone" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.629197 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.631248 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-27n7z" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.640711 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vnqb7"] Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.760276 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-bound-sa-token\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.760383 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dqc\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-kube-api-access-25dqc\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.861220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dqc\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-kube-api-access-25dqc\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.861645 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-bound-sa-token\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.885818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-bound-sa-token\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.886088 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dqc\" (UniqueName: \"kubernetes.io/projected/71612731-4e8e-4fd0-bce4-a2c7b9bd752a-kube-api-access-25dqc\") pod \"cert-manager-86cb77c54b-vnqb7\" (UID: \"71612731-4e8e-4fd0-bce4-a2c7b9bd752a\") " pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:40 crc kubenswrapper[4786]: I0127 00:17:40.946404 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-vnqb7" Jan 27 00:17:41 crc kubenswrapper[4786]: W0127 00:17:41.373353 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71612731_4e8e_4fd0_bce4_a2c7b9bd752a.slice/crio-fb7ce00a5e48436a991a5ef58df770191d80824a25983ef6aaccaf8a93de9e5d WatchSource:0}: Error finding container fb7ce00a5e48436a991a5ef58df770191d80824a25983ef6aaccaf8a93de9e5d: Status 404 returned error can't find the container with id fb7ce00a5e48436a991a5ef58df770191d80824a25983ef6aaccaf8a93de9e5d Jan 27 00:17:41 crc kubenswrapper[4786]: I0127 00:17:41.384044 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vnqb7"] Jan 27 00:17:41 crc kubenswrapper[4786]: I0127 00:17:41.481424 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-vnqb7" event={"ID":"71612731-4e8e-4fd0-bce4-a2c7b9bd752a","Type":"ContainerStarted","Data":"fb7ce00a5e48436a991a5ef58df770191d80824a25983ef6aaccaf8a93de9e5d"} Jan 27 00:17:42 crc kubenswrapper[4786]: I0127 00:17:42.489453 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-vnqb7" event={"ID":"71612731-4e8e-4fd0-bce4-a2c7b9bd752a","Type":"ContainerStarted","Data":"1504888e6cd4233ae69e0fe93e194e412b5f023469a06e434837a97854ef2f5b"} Jan 27 00:17:42 crc kubenswrapper[4786]: I0127 00:17:42.505042 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-vnqb7" podStartSLOduration=2.505023313 podStartE2EDuration="2.505023313s" podCreationTimestamp="2026-01-27 00:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 00:17:42.503758464 +0000 UTC m=+707.987445507" watchObservedRunningTime="2026-01-27 00:17:42.505023313 +0000 UTC m=+707.988710356" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.072094 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.074010 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.076028 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-sys-config" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.082819 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2hvtv" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.082843 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-global-ca" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.083770 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-ca" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.097787 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249424 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249465 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249495 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249649 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249672 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") 
" pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249692 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249723 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249738 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t69t\" (UniqueName: \"kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249796 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.249812 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.350915 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.350997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351085 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t69t\" (UniqueName: \"kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351160 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351299 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351340 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351371 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.351405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.352257 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.352329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.352433 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.352907 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.353228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.353300 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.354274 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.354672 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.354964 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.359506 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.359700 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.366781 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t69t\" (UniqueName: \"kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t\") pod \"service-telemetry-operator-4-build\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.393628 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:47 crc kubenswrapper[4786]: I0127 00:17:47.643083 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:48 crc kubenswrapper[4786]: I0127 00:17:48.137816 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="9e18f628-3329-4df0-a091-50b327ec89cd" containerName="elasticsearch" probeResult="failure" output=< Jan 27 00:17:48 crc kubenswrapper[4786]: {"timestamp": "2026-01-27T00:17:48+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 27 00:17:48 crc kubenswrapper[4786]: > Jan 27 00:17:48 crc kubenswrapper[4786]: I0127 00:17:48.535528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b35641e5-532a-4a1f-8656-4e5b264f888e","Type":"ContainerStarted","Data":"8901f2d3440e335c1538fc749a6fa4305d0d65ec4c4fd8affb64af3696589ac3"} Jan 27 00:17:49 crc kubenswrapper[4786]: I0127 00:17:49.547652 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b35641e5-532a-4a1f-8656-4e5b264f888e","Type":"ContainerStarted","Data":"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387"} Jan 27 00:17:49 crc kubenswrapper[4786]: E0127 00:17:49.612078 4786 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=7590965155756438792, SKID=, AKID=F0:0D:6D:88:24:7F:3D:CB:D1:6B:77:1C:09:0B:FF:9E:69:20:36:22 failed: x509: certificate signed by unknown authority" Jan 27 00:17:50 crc kubenswrapper[4786]: I0127 00:17:50.344884 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:17:50 crc kubenswrapper[4786]: I0127 00:17:50.345226 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:17:50 crc kubenswrapper[4786]: I0127 00:17:50.646654 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:51 crc kubenswrapper[4786]: I0127 00:17:51.561106 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="b35641e5-532a-4a1f-8656-4e5b264f888e" containerName="git-clone" containerID="cri-o://eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387" gracePeriod=30 Jan 27 00:17:51 crc kubenswrapper[4786]: I0127 00:17:51.945891 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b35641e5-532a-4a1f-8656-4e5b264f888e/git-clone/0.log" Jan 27 00:17:51 crc kubenswrapper[4786]: I0127 00:17:51.945959 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122027 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122108 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122162 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122245 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122285 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122304 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t69t\" (UniqueName: \"kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122325 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122345 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122370 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122400 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles\") pod \"b35641e5-532a-4a1f-8656-4e5b264f888e\" (UID: \"b35641e5-532a-4a1f-8656-4e5b264f888e\") " Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122641 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.122981 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.123598 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.123705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.123824 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.123873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). 
InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124058 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124088 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124149 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124480 4786 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124503 4786 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124518 4786 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124531 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124546 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124558 4786 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b35641e5-532a-4a1f-8656-4e5b264f888e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124590 4786 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124602 4786 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b35641e5-532a-4a1f-8656-4e5b264f888e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.124614 4786 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b35641e5-532a-4a1f-8656-4e5b264f888e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.127961 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-push") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "builder-dockercfg-2hvtv-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.128321 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t" (OuterVolumeSpecName: "kube-api-access-6t69t") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "kube-api-access-6t69t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.140865 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-pull") pod "b35641e5-532a-4a1f-8656-4e5b264f888e" (UID: "b35641e5-532a-4a1f-8656-4e5b264f888e"). InnerVolumeSpecName "builder-dockercfg-2hvtv-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.227688 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-push\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.227744 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/b35641e5-532a-4a1f-8656-4e5b264f888e-builder-dockercfg-2hvtv-pull\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.227760 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t69t\" (UniqueName: \"kubernetes.io/projected/b35641e5-532a-4a1f-8656-4e5b264f888e-kube-api-access-6t69t\") on node \"crc\" DevicePath \"\"" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.568199 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b35641e5-532a-4a1f-8656-4e5b264f888e/git-clone/0.log" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.568263 4786 generic.go:334] "Generic (PLEG): container finished" podID="b35641e5-532a-4a1f-8656-4e5b264f888e" containerID="eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387" exitCode=1 Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.568300 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b35641e5-532a-4a1f-8656-4e5b264f888e","Type":"ContainerDied","Data":"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387"} Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 
00:17:52.568342 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b35641e5-532a-4a1f-8656-4e5b264f888e","Type":"ContainerDied","Data":"8901f2d3440e335c1538fc749a6fa4305d0d65ec4c4fd8affb64af3696589ac3"} Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.568360 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.568366 4786 scope.go:117] "RemoveContainer" containerID="eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.589855 4786 scope.go:117] "RemoveContainer" containerID="eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387" Jan 27 00:17:52 crc kubenswrapper[4786]: E0127 00:17:52.590353 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387\": container with ID starting with eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387 not found: ID does not exist" containerID="eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.590380 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387"} err="failed to get container status \"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387\": rpc error: code = NotFound desc = could not find container \"eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387\": container with ID starting with eab3c9edd3143bb5d8e60d9a4ef8a4e91e35624cbd64b9e5652f881a148ae387 not found: ID does not exist" Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.602855 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:52 crc kubenswrapper[4786]: I0127 00:17:52.612981 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 27 00:17:53 crc kubenswrapper[4786]: I0127 00:17:53.155415 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35641e5-532a-4a1f-8656-4e5b264f888e" path="/var/lib/kubelet/pods/b35641e5-532a-4a1f-8656-4e5b264f888e/volumes" Jan 27 00:17:53 crc kubenswrapper[4786]: I0127 00:17:53.497721 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.252377 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:02 crc kubenswrapper[4786]: E0127 00:18:02.253182 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35641e5-532a-4a1f-8656-4e5b264f888e" containerName="git-clone" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.253199 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35641e5-532a-4a1f-8656-4e5b264f888e" containerName="git-clone" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.253334 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35641e5-532a-4a1f-8656-4e5b264f888e" containerName="git-clone" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.254486 4786 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.256954 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-ca" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.256955 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-global-ca" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.257184 4786 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2hvtv" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.266360 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-sys-config" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.276090 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.351670 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.351743 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.351777 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.351793 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.351952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352020 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir\") pod \"service-telemetry-operator-5-build\" 
(UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352053 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qt7q\" (UniqueName: \"kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352110 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352136 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352151 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352196 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.352266 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.452977 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453056 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453102 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453119 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453144 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453162 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453237 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qt7q\" (UniqueName: \"kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453305 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453322 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453593 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453624 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453678 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453883 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.453938 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.454152 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc 
kubenswrapper[4786]: I0127 00:18:02.454176 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.454370 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.459883 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.460961 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.481290 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qt7q\" (UniqueName: \"kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q\") pod \"service-telemetry-operator-5-build\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.580697 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:02 crc kubenswrapper[4786]: I0127 00:18:02.809449 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:03 crc kubenswrapper[4786]: I0127 00:18:03.670589 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"2e7de0f7-9103-407d-8676-4512622ae53d","Type":"ContainerStarted","Data":"857f97cd8bd0c26517c7d468ef81d7c4feaebb2b98fdf26ee749f8da1e5c515e"} Jan 27 00:18:04 crc kubenswrapper[4786]: I0127 00:18:04.699004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"2e7de0f7-9103-407d-8676-4512622ae53d","Type":"ContainerStarted","Data":"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94"} Jan 27 00:18:04 crc kubenswrapper[4786]: E0127 00:18:04.767087 4786 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=7590965155756438792, SKID=, AKID=F0:0D:6D:88:24:7F:3D:CB:D1:6B:77:1C:09:0B:FF:9E:69:20:36:22 failed: x509: certificate signed by unknown authority" Jan 27 00:18:05 crc kubenswrapper[4786]: I0127 00:18:05.810468 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:06 crc kubenswrapper[4786]: I0127 00:18:06.713064 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="2e7de0f7-9103-407d-8676-4512622ae53d" containerName="git-clone" containerID="cri-o://4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94" gracePeriod=30 Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.091355 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_2e7de0f7-9103-407d-8676-4512622ae53d/git-clone/0.log" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.091784 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213042 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213149 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213173 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213244 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qt7q\" (UniqueName: \"kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213270 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213290 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213307 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213353 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213384 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213403 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213420 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache\") pod \"2e7de0f7-9103-407d-8676-4512622ae53d\" (UID: \"2e7de0f7-9103-407d-8676-4512622ae53d\") " Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213692 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.213931 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.214052 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.214454 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.214589 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.214644 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). 
InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.214816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.215041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.215096 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.218628 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-push") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "builder-dockercfg-2hvtv-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.219204 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q" (OuterVolumeSpecName: "kube-api-access-5qt7q") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "kube-api-access-5qt7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.219270 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull" (OuterVolumeSpecName: "builder-dockercfg-2hvtv-pull") pod "2e7de0f7-9103-407d-8676-4512622ae53d" (UID: "2e7de0f7-9103-407d-8676-4512622ae53d"). InnerVolumeSpecName "builder-dockercfg-2hvtv-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315102 4786 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315366 4786 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315387 4786 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315405 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-pull\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-pull\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315423 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315440 4786 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e7de0f7-9103-407d-8676-4512622ae53d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315456 4786 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315477 4786 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315496 4786 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2hvtv-push\" (UniqueName: \"kubernetes.io/secret/2e7de0f7-9103-407d-8676-4512622ae53d-builder-dockercfg-2hvtv-push\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315515 4786 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7de0f7-9103-407d-8676-4512622ae53d-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315532 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qt7q\" (UniqueName: \"kubernetes.io/projected/2e7de0f7-9103-407d-8676-4512622ae53d-kube-api-access-5qt7q\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.315549 4786 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e7de0f7-9103-407d-8676-4512622ae53d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729588 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_2e7de0f7-9103-407d-8676-4512622ae53d/git-clone/0.log" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729636 4786 generic.go:334] "Generic (PLEG): container finished" podID="2e7de0f7-9103-407d-8676-4512622ae53d" containerID="4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94" exitCode=1 Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729663 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"2e7de0f7-9103-407d-8676-4512622ae53d","Type":"ContainerDied","Data":"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94"} Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729688 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"2e7de0f7-9103-407d-8676-4512622ae53d","Type":"ContainerDied","Data":"857f97cd8bd0c26517c7d468ef81d7c4feaebb2b98fdf26ee749f8da1e5c515e"} Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729703 4786 scope.go:117] "RemoveContainer" containerID="4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.729811 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.751708 4786 scope.go:117] "RemoveContainer" containerID="4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94" Jan 27 00:18:07 crc kubenswrapper[4786]: E0127 00:18:07.752363 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94\": container with ID starting with 4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94 not found: ID does not exist" containerID="4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.752405 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94"} err="failed to get container status \"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94\": rpc error: code = NotFound desc = could not find container \"4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94\": container with ID starting with 4492b8b26d9ed303158dcaacae24180fc8f6eef9c1b7042d2fd318f58e468d94 not found: ID does not exist" Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.771103 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:07 crc kubenswrapper[4786]: I0127 00:18:07.779343 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 27 00:18:09 crc kubenswrapper[4786]: I0127 00:18:09.161635 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7de0f7-9103-407d-8676-4512622ae53d" path="/var/lib/kubelet/pods/2e7de0f7-9103-407d-8676-4512622ae53d/volumes" Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.344731 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.345379 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.345451 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.346372 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.346463 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" containerID="cri-o://14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c" gracePeriod=600 Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.821794 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c" exitCode=0 Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.821858 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c"} Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.822204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485"} Jan 27 00:18:20 crc kubenswrapper[4786]: I0127 00:18:20.822232 4786 scope.go:117] "RemoveContainer" containerID="7dfb056b397d38c00bd667f8defe4c5e2a5848c258f323c8c2534ff23e4ebb37" Jan 27 00:18:27 crc kubenswrapper[4786]: I0127 00:18:27.995318 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.076483 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sj5df/must-gather-z26ws"] Jan 27 00:18:55 crc kubenswrapper[4786]: E0127 00:18:55.077460 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7de0f7-9103-407d-8676-4512622ae53d" containerName="git-clone" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.077484 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7de0f7-9103-407d-8676-4512622ae53d" containerName="git-clone" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.077665 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7de0f7-9103-407d-8676-4512622ae53d" containerName="git-clone" Jan 27 00:18:55 
crc kubenswrapper[4786]: I0127 00:18:55.078563 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.084155 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sj5df"/"openshift-service-ca.crt" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.084624 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sj5df"/"kube-root-ca.crt" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.112986 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sj5df/must-gather-z26ws"] Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.189615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmc9\" (UniqueName: \"kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.189677 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.291274 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqmc9\" (UniqueName: \"kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.291335 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.292056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.320239 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqmc9\" (UniqueName: \"kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9\") pod \"must-gather-z26ws\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.402972 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:18:55 crc kubenswrapper[4786]: I0127 00:18:55.810060 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sj5df/must-gather-z26ws"] Jan 27 00:18:56 crc kubenswrapper[4786]: I0127 00:18:56.093497 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sj5df/must-gather-z26ws" event={"ID":"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c","Type":"ContainerStarted","Data":"847b143d077884d82ba2454a01e2f2ca93c9bf4fc379b4ffd2a58a18c612a877"} Jan 27 00:19:03 crc kubenswrapper[4786]: I0127 00:19:03.166958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sj5df/must-gather-z26ws" event={"ID":"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c","Type":"ContainerStarted","Data":"82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b"} Jan 27 00:19:03 crc kubenswrapper[4786]: I0127 00:19:03.168551 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sj5df/must-gather-z26ws" event={"ID":"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c","Type":"ContainerStarted","Data":"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1"} Jan 27 00:19:43 crc kubenswrapper[4786]: I0127 00:19:43.493895 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bvxx5_f68d0d18-9904-432a-bcf6-8791e8a2fee0/control-plane-machine-set-operator/0.log" Jan 27 00:19:43 crc kubenswrapper[4786]: I0127 00:19:43.557145 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5zhf8_0c69ee0f-6cee-421a-a605-ec0b946a22c6/kube-rbac-proxy/0.log" Jan 27 00:19:43 crc kubenswrapper[4786]: I0127 00:19:43.627876 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5zhf8_0c69ee0f-6cee-421a-a605-ec0b946a22c6/machine-api-operator/0.log" Jan 27 00:19:56 crc kubenswrapper[4786]: I0127 00:19:56.191786 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-vnqb7_71612731-4e8e-4fd0-bce4-a2c7b9bd752a/cert-manager-controller/0.log" Jan 27 00:19:56 crc kubenswrapper[4786]: I0127 00:19:56.236474 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-zhmht_f31f927e-7e82-4072-a4de-2e312a541c86/cert-manager-cainjector/0.log" Jan 27 00:19:56 crc kubenswrapper[4786]: I0127 00:19:56.352939 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-hdqcp_5c20001c-cb4a-4cf6-8102-0273ce0f949a/cert-manager-webhook/0.log" Jan 27 00:20:11 crc kubenswrapper[4786]: I0127 00:20:11.399361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zbc54_e7226096-745e-4703-ace5-c936b6253c6b/prometheus-operator/0.log" Jan 27 00:20:11 crc kubenswrapper[4786]: I0127 00:20:11.442425 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77_a496e93b-aa8f-4501-ae30-38c707fb2367/prometheus-operator-admission-webhook/0.log" Jan 27 00:20:11 crc kubenswrapper[4786]: I0127 00:20:11.583329 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx_9fd33933-0292-4388-893a-e091b8f50b5e/prometheus-operator-admission-webhook/0.log" Jan 
27 00:20:11 crc kubenswrapper[4786]: I0127 00:20:11.627672 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-99tbv_c0ef0efb-5797-4b7f-a69c-8177fe9a148b/operator/0.log" Jan 27 00:20:11 crc kubenswrapper[4786]: I0127 00:20:11.790342 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k92dn_a1319c4e-44c7-4d01-a0b9-a79f7290e182/perses-operator/0.log" Jan 27 00:20:20 crc kubenswrapper[4786]: I0127 00:20:20.344624 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:20:20 crc kubenswrapper[4786]: I0127 00:20:20.345234 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:20:24 crc kubenswrapper[4786]: I0127 00:20:24.957913 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.058789 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.136245 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.175377 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.307122 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.361839 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.374737 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahsl9b_ee3c26cf-dfd2-47d5-85e9-9ec9e33478be/extract/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.527610 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.696068 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.701525 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.725720 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.879146 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/pull/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.906381 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/util/0.log" Jan 27 00:20:25 crc kubenswrapper[4786]: I0127 00:20:25.936635 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmjxr6_cb648cac-9420-4f49-bacd-96220d1d201c/extract/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.055060 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.226165 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/pull/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.226664 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.241089 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/pull/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.426971 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/pull/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.444701 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.486920 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ekfv5x_bcb2611d-11f3-4185-bea4-f995f1378bac/extract/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.584159 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.757952 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/pull/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.774071 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/pull/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.784097 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.920402 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/util/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.940867 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/extract/0.log" Jan 27 00:20:26 crc kubenswrapper[4786]: I0127 00:20:26.953115 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hdbx8_dbb96edc-2ef9-4042-9392-a4172d877678/pull/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.094046 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-utilities/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.270513 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-utilities/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.276425 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-content/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.287129 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-content/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.424796 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-utilities/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.451204 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/extract-content/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.619668 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-utilities/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.641131 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-5dj7f_6adbf4f2-5d00-4957-b9c0-7a68e3a5d184/registry-server/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.824762 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-content/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.825689 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-utilities/0.log" Jan 27 00:20:27 crc kubenswrapper[4786]: I0127 00:20:27.834014 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-content/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.005296 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-utilities/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.008321 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/extract-content/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.147009 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qrcmh_321f7da5-39cd-4cd5-a102-4ea98ed4a6c2/registry-server/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.214504 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-znvgc_d913b840-845c-4058-a37b-483f582f6ec6/marketplace-operator/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.271824 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-utilities/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.471038 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-content/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.481649 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-content/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.497840 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-utilities/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.648604 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-utilities/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.665903 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/extract-content/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.816391 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78lb5_cd63f298-237f-4100-8f4e-9838b123763f/registry-server/0.log" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.914001 4786 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-sj5df/must-gather-z26ws" podStartSLOduration=87.711112238 podStartE2EDuration="1m33.913972618s" podCreationTimestamp="2026-01-27 00:18:55 +0000 UTC" firstStartedPulling="2026-01-27 00:18:55.820019062 +0000 UTC m=+781.303706105" lastFinishedPulling="2026-01-27 00:19:02.022879402 +0000 UTC m=+787.506566485" observedRunningTime="2026-01-27 00:19:03.188802719 +0000 UTC m=+788.672489782" watchObservedRunningTime="2026-01-27 00:20:28.913972618 +0000 UTC m=+874.397659661" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.917828 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.919243 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:28 crc kubenswrapper[4786]: I0127 00:20:28.929911 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.075666 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.076302 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.076357 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2t4\" (UniqueName: \"kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.178272 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.178334 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.178367 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2t4\" (UniqueName: \"kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.178985 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.179031 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.195958 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2t4\" (UniqueName: \"kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4\") pod \"community-operators-gx6mq\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.240246 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:29 crc kubenswrapper[4786]: I0127 00:20:29.745856 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:30 crc kubenswrapper[4786]: I0127 00:20:30.695681 4786 generic.go:334] "Generic (PLEG): container finished" podID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerID="e8f02f007adf2b47dd1acd11a6e5fb3f56871c0e86ba7161273f9c95b9027596" exitCode=0 Jan 27 00:20:30 crc kubenswrapper[4786]: I0127 00:20:30.695789 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerDied","Data":"e8f02f007adf2b47dd1acd11a6e5fb3f56871c0e86ba7161273f9c95b9027596"} Jan 27 00:20:30 crc kubenswrapper[4786]: I0127 00:20:30.695992 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerStarted","Data":"31619febdd15433e223bef2d626f77f9ab6d2482ea82cc223c3fa09dd322c320"} Jan 27 00:20:31 crc kubenswrapper[4786]: I0127 00:20:31.704531 4786 generic.go:334] "Generic (PLEG): container finished" podID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerID="57d0b2165804cb0f0ab9d4e436178a09c65767720c78876d7b6c053901466f82" exitCode=0 Jan 27 00:20:31 crc kubenswrapper[4786]: I0127 00:20:31.704627 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerDied","Data":"57d0b2165804cb0f0ab9d4e436178a09c65767720c78876d7b6c053901466f82"} Jan 27 00:20:32 crc kubenswrapper[4786]: I0127 00:20:32.712785 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerStarted","Data":"9c317c6be1223c5d8fde56cb106bf42421b078803f9dd7daf2693e104c5bbbfe"} Jan 27 00:20:32 crc kubenswrapper[4786]: I0127 00:20:32.736861 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gx6mq" podStartSLOduration=3.337316537 podStartE2EDuration="4.736835609s" 
podCreationTimestamp="2026-01-27 00:20:28 +0000 UTC" firstStartedPulling="2026-01-27 00:20:30.697161245 +0000 UTC m=+876.180848288" lastFinishedPulling="2026-01-27 00:20:32.096680307 +0000 UTC m=+877.580367360" observedRunningTime="2026-01-27 00:20:32.731901092 +0000 UTC m=+878.215588135" watchObservedRunningTime="2026-01-27 00:20:32.736835609 +0000 UTC m=+878.220522652" Jan 27 00:20:39 crc kubenswrapper[4786]: I0127 00:20:39.240944 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:39 crc kubenswrapper[4786]: I0127 00:20:39.241563 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:39 crc kubenswrapper[4786]: I0127 00:20:39.276920 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:39 crc kubenswrapper[4786]: I0127 00:20:39.804030 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:39 crc kubenswrapper[4786]: I0127 00:20:39.850610 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:40 crc kubenswrapper[4786]: I0127 00:20:40.792186 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c786d5c79-5tb77_a496e93b-aa8f-4501-ae30-38c707fb2367/prometheus-operator-admission-webhook/0.log" Jan 27 00:20:40 crc kubenswrapper[4786]: I0127 00:20:40.804558 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c786d5c79-dzngx_9fd33933-0292-4388-893a-e091b8f50b5e/prometheus-operator-admission-webhook/0.log" Jan 27 00:20:40 crc kubenswrapper[4786]: I0127 00:20:40.822337 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zbc54_e7226096-745e-4703-ace5-c936b6253c6b/prometheus-operator/0.log" Jan 27 00:20:40 crc kubenswrapper[4786]: I0127 00:20:40.945024 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-99tbv_c0ef0efb-5797-4b7f-a69c-8177fe9a148b/operator/0.log" Jan 27 00:20:40 crc kubenswrapper[4786]: I0127 00:20:40.978341 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k92dn_a1319c4e-44c7-4d01-a0b9-a79f7290e182/perses-operator/0.log" Jan 27 00:20:41 crc kubenswrapper[4786]: I0127 00:20:41.774054 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gx6mq" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="registry-server" containerID="cri-o://9c317c6be1223c5d8fde56cb106bf42421b078803f9dd7daf2693e104c5bbbfe" gracePeriod=2 Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.782138 4786 generic.go:334] "Generic (PLEG): container finished" podID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerID="9c317c6be1223c5d8fde56cb106bf42421b078803f9dd7daf2693e104c5bbbfe" exitCode=0 Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.782287 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" 
event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerDied","Data":"9c317c6be1223c5d8fde56cb106bf42421b078803f9dd7daf2693e104c5bbbfe"} Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.842343 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.967692 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content\") pod \"17013867-a621-4cc7-9a8a-d33c33e67e10\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.967922 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk2t4\" (UniqueName: \"kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4\") pod \"17013867-a621-4cc7-9a8a-d33c33e67e10\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.967956 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities\") pod \"17013867-a621-4cc7-9a8a-d33c33e67e10\" (UID: \"17013867-a621-4cc7-9a8a-d33c33e67e10\") " Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.968893 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities" (OuterVolumeSpecName: "utilities") pod "17013867-a621-4cc7-9a8a-d33c33e67e10" (UID: "17013867-a621-4cc7-9a8a-d33c33e67e10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:20:42 crc kubenswrapper[4786]: I0127 00:20:42.984351 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4" (OuterVolumeSpecName: "kube-api-access-qk2t4") pod "17013867-a621-4cc7-9a8a-d33c33e67e10" (UID: "17013867-a621-4cc7-9a8a-d33c33e67e10"). InnerVolumeSpecName "kube-api-access-qk2t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.020386 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17013867-a621-4cc7-9a8a-d33c33e67e10" (UID: "17013867-a621-4cc7-9a8a-d33c33e67e10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.069848 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.069898 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk2t4\" (UniqueName: \"kubernetes.io/projected/17013867-a621-4cc7-9a8a-d33c33e67e10-kube-api-access-qk2t4\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.069917 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17013867-a621-4cc7-9a8a-d33c33e67e10-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.789920 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx6mq" event={"ID":"17013867-a621-4cc7-9a8a-d33c33e67e10","Type":"ContainerDied","Data":"31619febdd15433e223bef2d626f77f9ab6d2482ea82cc223c3fa09dd322c320"} Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.789982 4786 scope.go:117] "RemoveContainer" containerID="9c317c6be1223c5d8fde56cb106bf42421b078803f9dd7daf2693e104c5bbbfe" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.789979 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx6mq" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.812589 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.816496 4786 scope.go:117] "RemoveContainer" containerID="57d0b2165804cb0f0ab9d4e436178a09c65767720c78876d7b6c053901466f82" Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.818704 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gx6mq"] Jan 27 00:20:43 crc kubenswrapper[4786]: I0127 00:20:43.832900 4786 scope.go:117] "RemoveContainer" containerID="e8f02f007adf2b47dd1acd11a6e5fb3f56871c0e86ba7161273f9c95b9027596" Jan 27 00:20:45 crc kubenswrapper[4786]: I0127 00:20:45.157418 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" path="/var/lib/kubelet/pods/17013867-a621-4cc7-9a8a-d33c33e67e10/volumes" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.016806 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:46 crc kubenswrapper[4786]: E0127 00:20:46.017404 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="registry-server" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.017418 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="registry-server" Jan 27 00:20:46 crc kubenswrapper[4786]: E0127 00:20:46.017437 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="extract-utilities" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.017446 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="extract-utilities" Jan 27 00:20:46 crc kubenswrapper[4786]: E0127 00:20:46.017462 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="extract-content" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.017471 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="extract-content" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.017616 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="17013867-a621-4cc7-9a8a-d33c33e67e10" containerName="registry-server" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.018623 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.044472 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.119603 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.119682 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6qz\" (UniqueName: \"kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.119744 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.220721 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.220783 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.220822 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr6qz\" (UniqueName: \"kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.221383 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.221676 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.242801 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr6qz\" (UniqueName: \"kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz\") pod \"certified-operators-bh92l\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.337160 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.581733 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.813092 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerID="4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150" exitCode=0 Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.813162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerDied","Data":"4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150"} Jan 27 00:20:46 crc kubenswrapper[4786]: I0127 00:20:46.813191 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerStarted","Data":"2d29da0606e6006e9b66eacc47700efa8c2909c8b0e072a434eb64077149fd2f"} Jan 27 00:20:47 crc kubenswrapper[4786]: I0127 00:20:47.821712 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerID="af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0" exitCode=0 Jan 27 00:20:47 crc kubenswrapper[4786]: I0127 00:20:47.821813 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerDied","Data":"af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0"} Jan 27 00:20:48 crc kubenswrapper[4786]: I0127 00:20:48.830430 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerStarted","Data":"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1"} Jan 27 00:20:48 crc kubenswrapper[4786]: I0127 00:20:48.857816 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bh92l" podStartSLOduration=2.199889283 podStartE2EDuration="3.857793123s" podCreationTimestamp="2026-01-27 00:20:45 +0000 UTC" firstStartedPulling="2026-01-27 00:20:46.814846389 +0000 UTC m=+892.298533432" 
lastFinishedPulling="2026-01-27 00:20:48.472750209 +0000 UTC m=+893.956437272" observedRunningTime="2026-01-27 00:20:48.85336237 +0000 UTC m=+894.337049413" watchObservedRunningTime="2026-01-27 00:20:48.857793123 +0000 UTC m=+894.341480176" Jan 27 00:20:50 crc kubenswrapper[4786]: I0127 00:20:50.344882 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:20:50 crc kubenswrapper[4786]: I0127 00:20:50.345206 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:20:56 crc kubenswrapper[4786]: I0127 00:20:56.337606 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:56 crc kubenswrapper[4786]: I0127 00:20:56.338003 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:56 crc kubenswrapper[4786]: I0127 00:20:56.392111 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:56 crc kubenswrapper[4786]: I0127 00:20:56.936483 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:56 crc kubenswrapper[4786]: I0127 00:20:56.999065 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:58 crc kubenswrapper[4786]: I0127 00:20:58.910271 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bh92l" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="registry-server" containerID="cri-o://6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1" gracePeriod=2 Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.329908 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.413875 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities\") pod \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.413970 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr6qz\" (UniqueName: \"kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz\") pod \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.414089 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content\") pod \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\" (UID: \"5b3c1c1b-940c-4791-80d2-a398b38ecb7f\") " Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.414753 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities" (OuterVolumeSpecName: "utilities") pod "5b3c1c1b-940c-4791-80d2-a398b38ecb7f" (UID: "5b3c1c1b-940c-4791-80d2-a398b38ecb7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.422171 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz" (OuterVolumeSpecName: "kube-api-access-vr6qz") pod "5b3c1c1b-940c-4791-80d2-a398b38ecb7f" (UID: "5b3c1c1b-940c-4791-80d2-a398b38ecb7f"). InnerVolumeSpecName "kube-api-access-vr6qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.456200 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b3c1c1b-940c-4791-80d2-a398b38ecb7f" (UID: "5b3c1c1b-940c-4791-80d2-a398b38ecb7f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.518361 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.518402 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.518422 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr6qz\" (UniqueName: \"kubernetes.io/projected/5b3c1c1b-940c-4791-80d2-a398b38ecb7f-kube-api-access-vr6qz\") on node \"crc\" DevicePath \"\"" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.919522 4786 generic.go:334] "Generic (PLEG): container finished" podID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerID="6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1" exitCode=0 Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.919585 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerDied","Data":"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1"} Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.919599 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh92l" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.919624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh92l" event={"ID":"5b3c1c1b-940c-4791-80d2-a398b38ecb7f","Type":"ContainerDied","Data":"2d29da0606e6006e9b66eacc47700efa8c2909c8b0e072a434eb64077149fd2f"} Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.919643 4786 scope.go:117] "RemoveContainer" containerID="6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.940578 4786 scope.go:117] "RemoveContainer" containerID="af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0" Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.961106 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.966277 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bh92l"] Jan 27 00:20:59 crc kubenswrapper[4786]: I0127 00:20:59.983327 4786 scope.go:117] "RemoveContainer" containerID="4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.000052 4786 scope.go:117] "RemoveContainer" containerID="6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1" Jan 27 00:21:00 crc kubenswrapper[4786]: E0127 00:21:00.000654 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1\": container with ID starting with 6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1 not found: ID does not exist" containerID="6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.000694 
4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1"} err="failed to get container status \"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1\": rpc error: code = NotFound desc = could not find container \"6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1\": container with ID starting with 6a3b44ee9a21c2d018b1aedd77dfd148aa2944bbefefb8c4b4554053296013e1 not found: ID does not exist" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.000715 4786 scope.go:117] "RemoveContainer" containerID="af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0" Jan 27 00:21:00 crc kubenswrapper[4786]: E0127 00:21:00.001033 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0\": container with ID starting with af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0 not found: ID does not exist" containerID="af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.001065 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0"} err="failed to get container status \"af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0\": rpc error: code = NotFound desc = could not find container \"af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0\": container with ID starting with af94261c1752c9dc2440e49b955cae5d5c60897d776509e84676972d6d5f58e0 not found: ID does not exist" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.001083 4786 scope.go:117] "RemoveContainer" containerID="4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150" Jan 27 00:21:00 crc kubenswrapper[4786]: E0127 00:21:00.001540 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150\": container with ID starting with 4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150 not found: ID does not exist" containerID="4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150" Jan 27 00:21:00 crc kubenswrapper[4786]: I0127 00:21:00.002539 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150"} err="failed to get container status \"4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150\": rpc error: code = NotFound desc = could not find container \"4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150\": container with ID starting with 4ababa88ae652706888ba20d37c0021da940dc717b20ccd8fa6b07755de58150 not found: ID does not exist" Jan 27 00:21:01 crc kubenswrapper[4786]: I0127 00:21:01.155636 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" path="/var/lib/kubelet/pods/5b3c1c1b-940c-4791-80d2-a398b38ecb7f/volumes" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.791738 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:18 crc kubenswrapper[4786]: E0127 00:21:18.793020 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="extract-content" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.793050 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="extract-content" Jan 27 00:21:18 crc kubenswrapper[4786]: E0127 00:21:18.793109 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="registry-server" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.793127 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="registry-server" Jan 27 00:21:18 crc kubenswrapper[4786]: E0127 00:21:18.793152 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="extract-utilities" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.793171 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="extract-utilities" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.793436 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3c1c1b-940c-4791-80d2-a398b38ecb7f" containerName="registry-server" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.795520 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.798165 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.916966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.917098 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:18 crc kubenswrapper[4786]: I0127 00:21:18.917143 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56nm\" (UniqueName: \"kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.018024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.018108 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities\") pod \"redhat-operators-dl2fr\" (UID: 
\"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.018143 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f56nm\" (UniqueName: \"kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.018507 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.018803 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.047855 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f56nm\" (UniqueName: \"kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm\") pod \"redhat-operators-dl2fr\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.120783 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:19 crc kubenswrapper[4786]: I0127 00:21:19.380079 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.057326 4786 generic.go:334] "Generic (PLEG): container finished" podID="3fe50601-8286-40aa-a3ae-503982a95fdf" containerID="c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066" exitCode=0 Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.057552 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerDied","Data":"c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066"} Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.057592 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerStarted","Data":"715bcacb9dac3f748e04761d21722a786d26713bc633cd1f21f1d58efd0915b6"} Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.059038 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.344436 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.344495 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.344544 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.345056 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:21:20 crc kubenswrapper[4786]: I0127 00:21:20.345098 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" containerID="cri-o://bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485" gracePeriod=600 Jan 27 00:21:21 crc kubenswrapper[4786]: I0127 00:21:21.066302 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485" exitCode=0 Jan 27 00:21:21 crc kubenswrapper[4786]: I0127 00:21:21.066442 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485"} Jan 27 00:21:21 crc kubenswrapper[4786]: I0127 00:21:21.066942 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"527013f0d18f2bad33ffc35add44884ed753382c46018cda507c08ffaccf2a33"} Jan 27 00:21:21 crc kubenswrapper[4786]: I0127 00:21:21.066969 4786 scope.go:117] "RemoveContainer" containerID="14fb80b9117ac7972a4013278b4ba11b6f12c605d87da0084bf8f5f83523684c" Jan 27 00:21:22 crc kubenswrapper[4786]: I0127 00:21:22.078903 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerStarted","Data":"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797"} Jan 27 00:21:23 crc kubenswrapper[4786]: I0127 00:21:23.088443 4786 generic.go:334] "Generic (PLEG): container finished" podID="3fe50601-8286-40aa-a3ae-503982a95fdf" containerID="438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797" exitCode=0 Jan 27 00:21:23 crc kubenswrapper[4786]: I0127 00:21:23.088796 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerDied","Data":"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797"} Jan 27 00:21:24 crc kubenswrapper[4786]: I0127 00:21:24.098913 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" 
event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerStarted","Data":"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e"} Jan 27 00:21:24 crc kubenswrapper[4786]: I0127 00:21:24.126598 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dl2fr" podStartSLOduration=2.580730306 podStartE2EDuration="6.126582365s" podCreationTimestamp="2026-01-27 00:21:18 +0000 UTC" firstStartedPulling="2026-01-27 00:21:20.058852476 +0000 UTC m=+925.542539519" lastFinishedPulling="2026-01-27 00:21:23.604704495 +0000 UTC m=+929.088391578" observedRunningTime="2026-01-27 00:21:24.125094625 +0000 UTC m=+929.608781698" watchObservedRunningTime="2026-01-27 00:21:24.126582365 +0000 UTC m=+929.610269408" Jan 27 00:21:29 crc kubenswrapper[4786]: I0127 00:21:29.122031 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:29 crc kubenswrapper[4786]: I0127 00:21:29.123124 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:30 crc kubenswrapper[4786]: I0127 00:21:30.170839 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dl2fr" podUID="3fe50601-8286-40aa-a3ae-503982a95fdf" containerName="registry-server" probeResult="failure" output=< Jan 27 00:21:30 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 00:21:30 crc kubenswrapper[4786]: > Jan 27 00:21:34 crc kubenswrapper[4786]: I0127 00:21:34.188052 4786 generic.go:334] "Generic (PLEG): container finished" podID="874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" containerID="eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1" exitCode=0 Jan 27 00:21:34 crc kubenswrapper[4786]: I0127 00:21:34.188409 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sj5df/must-gather-z26ws" event={"ID":"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c","Type":"ContainerDied","Data":"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1"} Jan 27 00:21:34 crc kubenswrapper[4786]: I0127 00:21:34.189402 4786 scope.go:117] "RemoveContainer" containerID="eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1" Jan 27 00:21:34 crc kubenswrapper[4786]: I0127 00:21:34.942748 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sj5df_must-gather-z26ws_874adbc9-b91d-49d8-8bfa-e1ff3fdac30c/gather/0.log" Jan 27 00:21:39 crc kubenswrapper[4786]: I0127 00:21:39.174999 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:39 crc kubenswrapper[4786]: I0127 00:21:39.218705 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:39 crc kubenswrapper[4786]: I0127 00:21:39.416457 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.224655 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dl2fr" podUID="3fe50601-8286-40aa-a3ae-503982a95fdf" containerName="registry-server" containerID="cri-o://5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e" gracePeriod=2 Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.578471 4786 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.710242 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content\") pod \"3fe50601-8286-40aa-a3ae-503982a95fdf\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.710340 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f56nm\" (UniqueName: \"kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm\") pod \"3fe50601-8286-40aa-a3ae-503982a95fdf\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.710488 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities\") pod \"3fe50601-8286-40aa-a3ae-503982a95fdf\" (UID: \"3fe50601-8286-40aa-a3ae-503982a95fdf\") " Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.712186 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities" (OuterVolumeSpecName: "utilities") pod "3fe50601-8286-40aa-a3ae-503982a95fdf" (UID: "3fe50601-8286-40aa-a3ae-503982a95fdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.716928 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm" (OuterVolumeSpecName: "kube-api-access-f56nm") pod "3fe50601-8286-40aa-a3ae-503982a95fdf" (UID: "3fe50601-8286-40aa-a3ae-503982a95fdf"). InnerVolumeSpecName "kube-api-access-f56nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.812726 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f56nm\" (UniqueName: \"kubernetes.io/projected/3fe50601-8286-40aa-a3ae-503982a95fdf-kube-api-access-f56nm\") on node \"crc\" DevicePath \"\"" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.812786 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.872530 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fe50601-8286-40aa-a3ae-503982a95fdf" (UID: "3fe50601-8286-40aa-a3ae-503982a95fdf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:21:40 crc kubenswrapper[4786]: I0127 00:21:40.914402 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe50601-8286-40aa-a3ae-503982a95fdf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.242761 4786 generic.go:334] "Generic (PLEG): container finished" podID="3fe50601-8286-40aa-a3ae-503982a95fdf" containerID="5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e" exitCode=0 Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.242829 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerDied","Data":"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e"} Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.242878 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dl2fr" event={"ID":"3fe50601-8286-40aa-a3ae-503982a95fdf","Type":"ContainerDied","Data":"715bcacb9dac3f748e04761d21722a786d26713bc633cd1f21f1d58efd0915b6"} Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.242898 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dl2fr" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.242922 4786 scope.go:117] "RemoveContainer" containerID="5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.268632 4786 scope.go:117] "RemoveContainer" containerID="438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.274303 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.281701 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dl2fr"] Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.295720 4786 scope.go:117] "RemoveContainer" containerID="c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.322077 4786 scope.go:117] "RemoveContainer" containerID="5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e" Jan 27 00:21:41 crc kubenswrapper[4786]: E0127 00:21:41.322704 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e\": container with ID starting with 5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e not found: ID does not exist" containerID="5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.322772 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e"} err="failed to get container status \"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e\": rpc error: code = NotFound desc = could not find container \"5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e\": container with ID starting with 5da5a03990e1d23455a6fee990a6c6ea41463618f4a073bf7e8f6d7b796fdc8e not found: ID does not exist" Jan 27 00:21:41 crc 
kubenswrapper[4786]: I0127 00:21:41.322815 4786 scope.go:117] "RemoveContainer" containerID="438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797" Jan 27 00:21:41 crc kubenswrapper[4786]: E0127 00:21:41.323268 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797\": container with ID starting with 438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797 not found: ID does not exist" containerID="438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.323319 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797"} err="failed to get container status \"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797\": rpc error: code = NotFound desc = could not find container \"438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797\": container with ID starting with 438c09520523063f9dc54c9a3e654d8e4c48fd100f11abc90fe643a2346ae797 not found: ID does not exist" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.323347 4786 scope.go:117] "RemoveContainer" containerID="c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066" Jan 27 00:21:41 crc kubenswrapper[4786]: E0127 00:21:41.323763 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066\": container with ID starting with c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066 not found: ID does not exist" containerID="c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.323815 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066"} err="failed to get container status \"c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066\": rpc error: code = NotFound desc = could not find container \"c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066\": container with ID starting with c950f8c07b2237036bacee4f5427c121818bf0fc0c7487c833537bd3d0dfe066 not found: ID does not exist" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.384817 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sj5df/must-gather-z26ws"] Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.385045 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sj5df/must-gather-z26ws" podUID="874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" containerName="copy" containerID="cri-o://82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b" gracePeriod=2 Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.389455 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sj5df/must-gather-z26ws"] Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.734075 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sj5df_must-gather-z26ws_874adbc9-b91d-49d8-8bfa-e1ff3fdac30c/copy/0.log" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.734892 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.831215 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output\") pod \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.831313 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqmc9\" (UniqueName: \"kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9\") pod \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\" (UID: \"874adbc9-b91d-49d8-8bfa-e1ff3fdac30c\") " Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.835028 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9" (OuterVolumeSpecName: "kube-api-access-fqmc9") pod "874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" (UID: "874adbc9-b91d-49d8-8bfa-e1ff3fdac30c"). InnerVolumeSpecName "kube-api-access-fqmc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.899552 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" (UID: "874adbc9-b91d-49d8-8bfa-e1ff3fdac30c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.933186 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqmc9\" (UniqueName: \"kubernetes.io/projected/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-kube-api-access-fqmc9\") on node \"crc\" DevicePath \"\"" Jan 27 00:21:41 crc kubenswrapper[4786]: I0127 00:21:41.933223 4786 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.248976 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sj5df_must-gather-z26ws_874adbc9-b91d-49d8-8bfa-e1ff3fdac30c/copy/0.log" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.249446 4786 generic.go:334] "Generic (PLEG): container finished" podID="874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" containerID="82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b" exitCode=143 Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.249510 4786 scope.go:117] "RemoveContainer" containerID="82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.249513 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sj5df/must-gather-z26ws" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.266291 4786 scope.go:117] "RemoveContainer" containerID="eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.315159 4786 scope.go:117] "RemoveContainer" containerID="82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b" Jan 27 00:21:42 crc kubenswrapper[4786]: E0127 00:21:42.315639 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b\": container with ID starting with 82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b not found: ID does not exist" containerID="82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.315691 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b"} err="failed to get container status \"82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b\": rpc error: code = NotFound desc = could not find container \"82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b\": container with ID starting with 82fc853968bec1413b82fe15f173cb18c9b9b0363d0d3509f7a28ebb268c4e0b not found: ID does not exist" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.315715 4786 scope.go:117] "RemoveContainer" containerID="eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1" Jan 27 00:21:42 crc kubenswrapper[4786]: E0127 00:21:42.316035 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1\": container with ID starting with eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1 not found: ID does not exist" containerID="eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1" Jan 27 00:21:42 crc kubenswrapper[4786]: I0127 00:21:42.316079 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1"} err="failed to get container status \"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1\": rpc error: code = NotFound desc = could not find container \"eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1\": container with ID starting with eea2afebdc640fe060c9ec4171becad3810ab068acdc5ab02ce09c047b8b94d1 not found: ID does not exist" Jan 27 00:21:43 crc kubenswrapper[4786]: I0127 00:21:43.154848 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe50601-8286-40aa-a3ae-503982a95fdf" path="/var/lib/kubelet/pods/3fe50601-8286-40aa-a3ae-503982a95fdf/volumes" Jan 27 00:21:43 crc kubenswrapper[4786]: I0127 00:21:43.155586 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874adbc9-b91d-49d8-8bfa-e1ff3fdac30c" path="/var/lib/kubelet/pods/874adbc9-b91d-49d8-8bfa-e1ff3fdac30c/volumes" Jan 27 00:23:20 crc kubenswrapper[4786]: I0127 00:23:20.344699 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 27 00:23:20 crc kubenswrapper[4786]: I0127 00:23:20.345278 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:23:50 crc kubenswrapper[4786]: I0127 00:23:50.345194 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:23:50 crc kubenswrapper[4786]: I0127 00:23:50.346138 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:24:20 crc kubenswrapper[4786]: I0127 00:24:20.344683 4786 patch_prober.go:28] interesting pod/machine-config-daemon-87nzd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 00:24:20 crc kubenswrapper[4786]: I0127 00:24:20.345537 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 00:24:20 crc kubenswrapper[4786]: I0127 00:24:20.345639 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" Jan 27 00:24:20 crc kubenswrapper[4786]: I0127 00:24:20.346441 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"527013f0d18f2bad33ffc35add44884ed753382c46018cda507c08ffaccf2a33"} pod="openshift-machine-config-operator/machine-config-daemon-87nzd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 00:24:20 crc kubenswrapper[4786]: I0127 00:24:20.346534 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" podUID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerName="machine-config-daemon" containerID="cri-o://527013f0d18f2bad33ffc35add44884ed753382c46018cda507c08ffaccf2a33" gracePeriod=600 Jan 27 00:24:21 crc kubenswrapper[4786]: I0127 00:24:21.375909 4786 generic.go:334] "Generic (PLEG): container finished" podID="bcd24fc4-5ad4-4080-aa07-55552ab1e5e6" containerID="527013f0d18f2bad33ffc35add44884ed753382c46018cda507c08ffaccf2a33" exitCode=0 Jan 27 00:24:21 crc kubenswrapper[4786]: I0127 00:24:21.375996 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerDied","Data":"527013f0d18f2bad33ffc35add44884ed753382c46018cda507c08ffaccf2a33"} Jan 27 
00:24:21 crc kubenswrapper[4786]: I0127 00:24:21.376246 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-87nzd" event={"ID":"bcd24fc4-5ad4-4080-aa07-55552ab1e5e6","Type":"ContainerStarted","Data":"d380b14de0b74ca3b56d9ebdaef363abcb2a60a736b3de5018405ce666a6b06d"} Jan 27 00:24:21 crc kubenswrapper[4786]: I0127 00:24:21.376269 4786 scope.go:117] "RemoveContainer" containerID="bf9a532dbc005302f9c28a326cdf02eea3076cd5fd1218f3a0fb5b351dc28485"
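
The two probe failures that recur in this log can be reproduced by hand outside the kubelet: the machine-config-daemon liveness probe is an HTTP GET against http://127.0.0.1:8798/health (failing here with "connection refused"), and the registry-server startup/readiness probe is a gRPC health check against port 50051 (failing here with "timeout: failed to connect service ':50051' within 1s"). The Go sketch below is a hypothetical helper, not kubelet code and not part of any pod shown above; the endpoints and the 1s timeout are taken from the log, while the file name, function names, and the assumption that the registry server exposes the standard grpc.health.v1 service are illustrative.

// probecheck.go - minimal sketch that re-runs the two probes seen failing above.
// Endpoints and timeout come from the log; everything else is assumed for illustration.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// httpProbe mirrors an HTTP liveness probe: a connection error or a non-2xx/3xx
// status is a failure, matching the "connect: connection refused" records above.
func httpProbe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

// grpcProbe mirrors a grpc_health_probe-style check: connect to the port and
// call the standard grpc.health.v1 Check RPC within the probe timeout.
func grpcProbe(addr string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	conn, err := grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		return err
	}
	if resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		return fmt.Errorf("not serving: %s", resp.GetStatus())
	}
	return nil
}

func main() {
	// Liveness endpoint of machine-config-daemon-87nzd, per the prober.go lines above.
	if err := httpProbe("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Println("liveness probe failed:", err)
	}
	// Startup/readiness endpoint of the marketplace registry-server containers.
	if err := grpcProbe("127.0.0.1:50051", 1*time.Second); err != nil {
		fmt.Println("startup/readiness probe failed:", err)
	}
}

Run on the node (for example with go run probecheck.go) while the pods are in the states shown above, and the output should roughly mirror the "Probe failed" records: connection refused for the daemon until its container restart completes, and a timeout or unavailable error for the registry server until its startup probe transitions to "started".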